





This is because, at the level of individuals, consumption c is determined by income y. At the back of our minds there is a causal relationship from y to c at the micro level, and we carry this over to the macro level as well.

In an exactly identified system normalization does not matter. To see this, consider the first equation in (9.8). We saw earlier that we use z1, z2, z3 as instrumental variables. Since there is no problem of weighting these instrumental variables, it does not matter how the equation is normalized. On the other hand, for the second equation in (9.8) we have the choice between z1 and z2 as instruments, and the optimum instrumental variable is a weighted average of z1 and z2. We saw that these weights were obtained from the reduced-form equation for y1. On the other hand, if the equation is normalized with respect to y1 and written as

y1 = (1/b2)y2 - (c3/b2)z3 - (1/b2)u2

then the weights for z1 and z2 are obtained from the reduced-form equation for y2. Thus the 2SLS and IV estimators are different for different normalizations in over-identified systems.
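This dependence of 2SLS on the choice of normalization is easy to check by simulation. The sketch below generates data from a two-equation system of the form (9.8) with made-up coefficient values (b1 = 0.5, b2 = 0.4, all c's equal to 1; none of these come from the text), estimates the over-identified second equation by 2SLS under both normalizations, and shows that the two implied estimates of b2 are close but not equal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Illustrative structural system of the form (9.8):
#   y1 = b1*y2 + c1*z1 + c2*z2 + u1   (exactly identified)
#   y2 = b2*y1 + c3*z3 + u2           (over-identified)
b1, c1, c2 = 0.5, 1.0, 1.0
b2, c3 = 0.4, 1.0

z = rng.standard_normal((n, 3))
u = rng.standard_normal((n, 2))

# Solve the two equations jointly for y1, y2 (i.e., use the reduced form).
A = np.array([[1.0, -b1], [-b2, 1.0]])
rhs = np.column_stack([c1*z[:, 0] + c2*z[:, 1] + u[:, 0],
                       c3*z[:, 2] + u[:, 1]])
y1, y2 = np.linalg.solve(A, rhs.T)

def ols(y, X):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# First-stage fitted values from the reduced-form regressions on z1, z2, z3.
y1_hat = z @ ols(y1, z)
y2_hat = z @ ols(y2, z)

# Normalization on y2: regress y2 on y1_hat and z3.
b2_norm_y2 = ols(y2, np.column_stack([y1_hat, z[:, 2]]))[0]

# Normalization on y1: regress y1 on y2_hat and z3; the implied
# estimate of b2 is the reciprocal of the coefficient on y2_hat.
b2_norm_y1 = 1.0 / ols(y1, np.column_stack([y2_hat, z[:, 2]]))[0]

print(b2_norm_y2, b2_norm_y1)  # both near 0.4, but not identical
```

Both estimators are consistent for b2; the point is only that, in an over-identified equation, they do not coincide in any finite sample.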

*9.8 The Limited-Information Maximum Likelihood Method

The LIML method, also known as the least variance ratio (LVR) method, is the first of the single-equation methods suggested for simultaneous-equations models. It was suggested by Anderson and Rubin in 1949 and was popular until the advent of the 2SLS method introduced by Theil in the late 1950s. The LIML method is computationally more cumbersome, but for the simple models we are considering, it is easy to use. Consider again the equations in (9.8). We can write the first equation as

y1* = y1 - b1y2 = c1z1 + c2z2 + u1          (9.17)

For each b1, we can construct a y1*. Consider a regression of y1* on z1 and z2 only and compute the residual sum of squares (which will be a function of b1). Call it RSS1. Now consider a regression of y1* on all the exogenous variables z1, z2, z3 and compute the residual sum of squares. Call it RSS2. What equation (9.17) says is that z3 is not important in determining y1*. Thus the extra reduction in the residual sum of squares obtained by adding z3 should be minimal. The LIML or LVR method says that we should choose b1 so that (RSS1 - RSS2)/RSS2 or, equivalently, RSS1/RSS2 is minimized. After

T. W. Anderson and H. Rubin, "Estimation of the Parameters of a Single Equation in a Complete System of Stochastic Equations," Annals of Mathematical Statistics, Vol. 20, No. 1, March 1949.

H. Theil, Economic Forecasts and Policy (Amsterdam: North-Holland, 1958).



b1 is determined, the estimates of c1 and c2 are obtained by regressing y1* on z1 and z2. The procedure is similar for the second equation in (9.8).
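The variance-ratio calculation is easy to program. The sketch below uses an illustrative two-equation system of the form (9.8) with made-up coefficients (not from the text), minimizes RSS1/RSS2 over a grid of b1 values for the first equation, and confirms that, because that equation is exactly identified, the minimizing b1 coincides with the 2SLS estimate and the minimized ratio is 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

# Illustrative system of the form (9.8); b1 is the parameter of interest.
b1, c1, c2, b2, c3 = 0.5, 1.0, 1.0, 0.4, 1.0
z = rng.standard_normal((n, 3))
u = rng.standard_normal((n, 2))
A = np.array([[1.0, -b1], [-b2, 1.0]])
rhs = np.column_stack([c1*z[:, 0] + c2*z[:, 1] + u[:, 0],
                       c3*z[:, 2] + u[:, 1]])
y1, y2 = np.linalg.solve(A, rhs.T)

def resid(y, X):
    # OLS residuals of y on X
    return y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

# y1* = y1 - b*y2, so its residuals are e1 - b*e2 for any regressor set.
e1_small, e2_small = resid(y1, z[:, :2]), resid(y2, z[:, :2])  # z1, z2 only
e1_all, e2_all = resid(y1, z), resid(y2, z)                    # z1, z2, z3

def var_ratio(b):
    rss1 = np.sum((e1_small - b*e2_small)**2)
    rss2 = np.sum((e1_all - b*e2_all)**2)
    return rss1 / rss2

# Coarse grid search over b1, followed by one refinement pass.
grid = np.linspace(-5, 5, 10001)
b = grid[np.argmin([var_ratio(g) for g in grid])]
grid = np.linspace(b - 0.01, b + 0.01, 2001)
b_liml = grid[np.argmin([var_ratio(g) for g in grid])]

# 2SLS for comparison: regress y1 on (fitted y2, z1, z2).
y2_hat = z @ np.linalg.lstsq(z, y2, rcond=None)[0]
b_2sls = np.linalg.lstsq(np.column_stack([y2_hat, z[:, :2]]), y1,
                         rcond=None)[0][0]

print(b_liml, b_2sls, var_ratio(b_liml))
```

In production one would solve the minimization as an eigenvalue problem rather than by grid search, but the grid makes the "least variance ratio" idea transparent.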

There are some important relationships between the LIML and 2SLS methods. (We will omit the proofs, which are beyond the scope of this book.)

1. The 2SLS method can be shown to minimize the difference RSS1 - RSS2, whereas the LIML method minimizes the ratio RSS1/RSS2.

2. If the equation under consideration is exactly identified, then 2SLS and LIML give identical estimates.

3. The LIML estimates are invariant to normalization.

4. The asymptotic variances and covariances of the LIML estimates are the same as those of the 2SLS estimates. However, the standard errors will differ because the estimate of the error variance is based on different estimates of the structural parameters.

5. In the computation of the LIML estimates we use the variances and covariances among the endogenous variables as well, but the 2SLS estimates do not depend on this information. For instance, in the 2SLS estimation of the first equation in (9.8), we regress y1 on ŷ2, z1, and z2. Since ŷ2 is a linear function of the z's, we do not make any use of cov(y1, y2). This covariance is used only in the estimation of the error variance.
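Property 3 can also be checked numerically. The sketch below sets up an illustrative two-equation system of the form (9.8) with made-up coefficients (not from the text) and estimates its over-identified second equation by the variance-ratio method under both normalizations; the two answers agree exactly (up to grid resolution), which is the invariance the text asserts.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400

# Illustrative system of the form (9.8); the second equation,
#   y2 = b2*y1 + c3*z3 + u2,
# is over-identified (both z1 and z2 are excluded from it).
b1, c1, c2, b2, c3 = 0.5, 1.0, 1.0, 0.4, 1.0
z = rng.standard_normal((n, 3))
u = rng.standard_normal((n, 2))
A = np.array([[1.0, -b1], [-b2, 1.0]])
rhs = np.column_stack([c1*z[:, 0] + c2*z[:, 1] + u[:, 0],
                       c3*z[:, 2] + u[:, 1]])
y1, y2 = np.linalg.solve(A, rhs.T)

def resid(y, X):
    # OLS residuals of y on X
    return y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

z3 = z[:, 2:3]
f1_small, f2_small = resid(y1, z3), resid(y2, z3)   # regressors: z3 only
f1_all, f2_all = resid(y1, z), resid(y2, z)         # regressors: z1, z2, z3

def liml(e_lhs_small, e_rhs_small, e_lhs_all, e_rhs_all):
    # Minimize RSS1/RSS2 over the coefficient of the right-hand
    # endogenous variable by a two-pass grid search.
    def ratio(b):
        return (np.sum((e_lhs_small - b*e_rhs_small)**2) /
                np.sum((e_lhs_all - b*e_rhs_all)**2))
    grid = np.linspace(-5, 5, 10001)
    b = grid[np.argmin([ratio(g) for g in grid])]
    grid = np.linspace(b - 0.01, b + 0.01, 2001)
    return grid[np.argmin([ratio(g) for g in grid])]

b_norm_y2 = liml(f2_small, f1_small, f2_all, f1_all)        # y2 = b*y1 + ...
b_norm_y1 = 1.0 / liml(f1_small, f2_small, f1_all, f2_all)  # y1 = d*y2 + ...

print(b_norm_y2, b_norm_y1)  # equal up to grid resolution
```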

Illustrative Example

Consider the demand and supply model of the Australian wine industry discussed in Section 9.5. Since the demand function is exactly identified, the 2SLS and LIML estimates would be identical and they are also identical to the instrumental variable estimates (using S as an instrument) presented earlier in Section 9.5.

As for the supply function, since it is over-identified, the 2SLS and LIML estimates will be different. The following are the 2SLS and LIML estimates of the parameters of the supply function (as computed from the SAS program). All variables are in logs:

                        2SLS                          LIML
Variable     Coefficient    SE     t-Ratio   Coefficient    SE     t-Ratio
Intercept      -16.820     1.080   -15.57      -16.849     1.087   -15.49
                 2.616     0.331     7.89        2.627     0.334     7.86
                 1.188     0.190     6.24        1.183     0.191     6.18

σ̂ (as computed by the SAS program): 0.9548 (2SLS), 0.9544 (LIML)

Actually, in this particular example, the LIML and 2SLS estimates are not much different. This, however, is not the usual experience in studies of over-identified models.



*9.9 On the Use of OLS in the Estimation of Simultaneous-Equations Models

Although we know that the simultaneity problem results in inconsistent estimators of the parameters when the structural equations are estimated by ordinary least squares (OLS), this does not mean that OLS estimation of simultaneous-equations models is useless. In some instances we may be able to say something about the direction of the (large-sample) bias, and this would be useful information. Also, if an equation is under-identified, it does not necessarily mean that nothing can be said about the parameters in that equation. Consider the demand and supply model:

q_t = β p_t + u_t        (demand function)
q_t = α p_t + v_t        (supply function)

q_t and p_t are in log form and are measured in deviations from their means. Thus β and α are the price elasticities of demand and supply, respectively. Let var(u_t) = σ_u², var(v_t) = σ_v², and cov(u_t, v_t) = σ_uv. The OLS estimator of β is

β̂ = Σ p_t q_t / Σ p_t² = β + [(1/n) Σ p_t u_t] / [(1/n) Σ p_t²]

where n is the sample size. Now, since β p_t + u_t = α p_t + v_t, we have

p_t = (v_t - u_t) / (β - α)

Hence we have

plim (1/n) Σ p_t u_t = cov(p_t, u_t) = (σ_uv - σ_u²) / (β - α)

plim (1/n) Σ p_t² = var(p_t) = (σ_u² + σ_v² - 2σ_uv) / (β - α)²

Thus

plim β̂ = β + (β - α)(σ_uv - σ_u²) / (σ_u² + σ_v² - 2σ_uv)
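The probability limit just derived can be checked by simulation. In the sketch below the elasticities and error variances are made-up values (β = -1, α = 1, σ_u² = 1, σ_v² = 2, σ_uv = 0.3, not from the text); with a large sample, the OLS slope settles at the value the formula predicts rather than at β.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Made-up parameter values for illustration.
beta, alpha = -1.0, 1.0            # demand and supply price elasticities
s_u2, s_v2, s_uv = 1.0, 2.0, 0.3   # var(u), var(v), cov(u, v)

# Draw correlated structural errors, then solve for p_t and q_t.
uv = rng.multivariate_normal([0.0, 0.0], [[s_u2, s_uv], [s_uv, s_v2]], n)
u, v = uv[:, 0], uv[:, 1]
p = (v - u) / (beta - alpha)       # from equating demand and supply
q = beta * p + u

beta_ols = np.sum(p * q) / np.sum(p * p)
plim_formula = beta + (beta - alpha) * (s_uv - s_u2) / (s_u2 + s_v2 - 2*s_uv)

print(beta_ols, plim_formula)  # both about -0.417, far from beta = -1
```

With these parameter values the large-sample bias pulls the OLS slope roughly halfway toward the supply elasticity, which is the mixture-of-two-curves intuition behind the simultaneity problem.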


