
of using -2 log_e λ as a χ² with k d.f., where k is the number of restrictions. Note that it is log to the base e (natural logarithm).

In our least squares model, suppose that we want to test the hypothesis β = 0. What we do is obtain

URSS = unrestricted residual sum of squares
RRSS = restricted residual sum of squares

As derived in the preceding section, we have the unrestricted maximum of the likelihood function = c(URSS)^(-n/2) and the restricted maximum = c(RRSS)^(-n/2). Thus

λ = c(RRSS)^(-n/2) / c(URSS)^(-n/2) = (URSS/RRSS)^(n/2)

Hence

-2 log λ = n(log_e RRSS - log_e URSS)

and we use this as a χ² with 1 d.f. In the case of our simple regression model this test might sound complicated, but the point is that if we want to test the hypothesis β = 0, note that

RRSS = S_yy
URSS = S_yy(1 - r²)

Hence -2 log λ = -n log_e(1 - r²) = n log_e[1/(1 - r²)]. This we use as a χ² with 1 d.f. Of course, in the simple regression model we would not be using this test, but the LR test is applicable in a very wide class of situations and is used in nonlinear models where small-sample tests are not available.

(6) The Wald and Lagrangian Multiplier Tests

There are two other commonly used large-sample tests that are based on the ML method: the W (Wald) test and the LM (Lagrangian multiplier) test. We will derive the expressions for these test statistics in the case of the simple regression model. Note that the t-test for the hypothesis β = 0 is based on the statistic

t = β̂ / SE(β̂)

and in deriving SE(β̂) we use an unbiased estimator for σ². Instead, suppose that we use the ML estimator σ̂² = RSS/n that we derived earlier. Then we get the Wald test. Note that since this is a large-sample test we use the standard normal distribution, or the χ²-distribution if we consider the squared test statistic. Thus

W = β̂² / [estimate of var(β̂)]

which we use as a χ² with 1 d.f. The estimate of var(β̂) is σ̂²/S_xx, where σ̂² = URSS/n = S_yy(1 - r²)/n. Noting that β̂ = S_xy/S_xx, we get, on simplification,

W = nr²/(1 - r²)
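As a quick numerical check of this simplification (not part of the original derivation; the simulated data and variable names below are illustrative assumptions), the following Python sketch computes W directly as β̂²/[σ̂²/S_xx] and again from the shortcut formula nr²/(1 - r²); the two values agree up to rounding error.

import numpy as np

# Artificial data for the simple regression y = alpha + beta*x + u
rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 1.0 + 0.3 * x + rng.normal(size=n)

# Sums of squares and cross-products about the means
Sxx = np.sum((x - x.mean()) ** 2)
Syy = np.sum((y - y.mean()) ** 2)
Sxy = np.sum((x - x.mean()) * (y - y.mean()))

beta_hat = Sxy / Sxx              # unrestricted ML (= least squares) estimate
r2 = Sxy ** 2 / (Sxx * Syy)       # squared correlation coefficient
URSS = Syy * (1 - r2)             # unrestricted residual sum of squares
sigma2_hat = URSS / n             # ML estimator of sigma^2 (divides by n, not n - 2)

W_direct = beta_hat ** 2 / (sigma2_hat / Sxx)   # W from its definition
W_formula = n * r2 / (1 - r2)                   # W from the simplified formula
print(W_direct, W_formula)                      # the two numbers coincide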

For the LM test we use the restricted residual sum of squares (i.e., the residual sum of squares with β = 0). This is nothing but S_yy, and the restricted ML estimate of σ² is σ̃² = S_yy/n. Thus, the LM test statistic is

LM = nr²

which again has a χ²-distribution with 1 d.f. In summary, we have

LR = n log_e[1/(1 - r²)]
W = nr²/(1 - r²)
LM = nr²

Each has a χ²-distribution with 1 d.f. These simple formulae are useful to remember and will be used in subsequent chapters.
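As an illustration of these formulae (the code below is not from the text; the helper function lr_w_lm and the artificial data are my own), the following Python sketch computes all three statistics from the restricted and unrestricted residual sums of squares, using r² = 1 - URSS/RRSS, so that each value can be compared with the 5 percent χ² critical value of 3.84 with 1 d.f.

import numpy as np

def lr_w_lm(y, x):
    # LR, W, and LM statistics for H0: beta = 0 in y = alpha + beta*x + u,
    # written in terms of the restricted and unrestricted residual sums of squares.
    n = len(y)
    rrss = np.sum((y - y.mean()) ** 2)             # restricted model (beta = 0): RRSS = Syy
    X = np.column_stack([np.ones(n), x])           # unrestricted model: regress y on (1, x)
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    urss = np.sum(resid ** 2)                      # URSS = Syy * (1 - r^2)
    r2 = 1 - urss / rrss
    lr = n * np.log(rrss / urss)                   # = n log[1/(1 - r^2)]
    w = n * r2 / (1 - r2)                          # = n (RRSS - URSS) / URSS
    lm = n * r2                                    # = n (RRSS - URSS) / RRSS
    return lr, w, lm

rng = np.random.default_rng(1)
x = rng.normal(size=40)
y = 0.2 * x + rng.normal(size=40)
print(lr_w_lm(y, x))   # compare each value with the chi-square(1) 5% point, 3.84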

There is an interesting relationship between these test statistics that is valid for linear regression models:

W ≥ LR ≥ LM

Note that W/n = r²/(1 - r²). Hence LM/n = r² = (W/n)/(1 + W/n). Also, LR/n = log_e[1/(1 - r²)] = log_e(1 + W/n). For x > 0, there is a famous inequality

x ≥ log_e(1 + x) ≥ x/(1 + x)

Substituting x = W/n, we get

W/n ≥ LR/n ≥ LM/n

or

W ≥ LR ≥ LM

What this suggests is that a hypothesis can be rejected by the W test but not rejected by the LM test. An example is provided in Section 4.12.
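A small illustrative calculation (the numbers are hypothetical, chosen here for illustration, and are not the example of Section 4.12) shows how this can happen. With n = 40 and r² = 0.09 we get W ≈ 3.96, LR ≈ 3.77, and LM = 3.60, so at the 5 percent level (critical value 3.84 for a χ² with 1 d.f.) the W test rejects the hypothesis while the LR and LM tests do not:

import numpy as np

n, r2 = 40, 0.09                   # hypothetical sample size and r-squared
W = n * r2 / (1 - r2)              # about 3.96
LR = n * np.log(1 / (1 - r2))      # about 3.77
LM = n * r2                        # 3.60
crit = 3.84                        # chi-square(1) critical value at the 5% level
print(W > crit, LR > crit, LM > crit)   # True, False, False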

The LR test was suggested by Neyman and Pearson in 1928. The W test was suggested by Abraham Wald in 1943. The LM test was suggested by C. R. Rao in 1948, but the name Lagrangian multiplier test was given in 1959 by S. D. Silvey. The test should more appropriately be called "Rao's score test," but since the "LM test" terminology is more common in econometrics, we shall use it here. The inequality between the test statistics was first pointed out by Berndt and Savin in 1977.

For the LR test we need the ML estimates from both the restricted and unrestricted maximization of the likelihood function. For the W test we need only the unrestricted ML estimates. For the LM test we need only the restricted ML estimates. Since the last estimates are the easiest to obtain, the LM test is very popular in econometric work.†

(7) Intuition Behind the LR, W, and LM Tests

The three tests described here are all based on ML estimation. Before we discuss their interrelationships, we present a few results in the theory of ML estimation.

1. ∂ log L/∂θ is called the score function and is denoted by S(θ). The ML estimator of θ is obtained by solving S(θ) = 0.

2. The quantity E[-∂² log L/∂θ²] is called the information on θ in the sample and is denoted by I(θ). Intuitively speaking, the second derivative measures the curvature of the function (in this case the likelihood function). The sharper the peak is, the more the information in the sample on θ. If the likelihood function is relatively flat at the top, this means that many values of θ are almost equally likely; that is, there is little information on θ in the sample.

3. The expression I(θ) plays a central role in the theory of ML estimation. It has been proved that, under fairly general conditions, the ML estimator θ̂ is consistent and asymptotically normally distributed with variance [I(θ)]⁻¹. This quantity is also called the information limit to the variance, or the Cramér-Rao lower bound for the variance of the estimator of θ. It is called the lower bound because it has been shown that the variance of any other consistent estimator is not less than this; that is, the ML estimator has the least variance.

4. In practice we estimate I(θ) by (-∂² log L/∂θ²), that is, omitting the expectation part. Since the derivative of a function at a point is the slope of the tangent to that function at that point, (-∂² log L/∂θ²) is the negative of the slope of the score function. The question is: At what point is this slope calculated? For the W test, this slope is calculated at the point given by the ML estimate θ̂. For the LM test, it is calculated at the point θ = θ₀ specified by the null hypothesis. (A short numerical sketch of these quantities follows this list.)
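The following Python sketch (my own illustration, assuming a deliberately simplified model y_i = βx_i + u_i with σ² = 1 known, rather than the full regression model of the text) computes the score and the observed information and evaluates them at the ML estimate β̂ and at the null value β₀, the two evaluation points used by the W and LM tests respectively.

import numpy as np

# Simplified model for illustration: y_i = beta * x_i + u_i, u_i ~ N(0, 1), sigma^2 known.
rng = np.random.default_rng(2)
n = 30
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)

def log_lik(b):
    return -0.5 * n * np.log(2 * np.pi) - 0.5 * np.sum((y - b * x) ** 2)

def score(b):                                # S(beta) = d log L / d beta
    return np.sum(x * (y - b * x))

info = np.sum(x ** 2)                        # -d^2 log L / d beta^2 (constant in beta here)

beta_hat = np.sum(x * y) / np.sum(x ** 2)    # solves score(beta) = 0
beta_0 = 0.0                                 # null hypothesis H0: beta = 0
print(score(beta_hat))                       # ~0: the score vanishes at the ML estimate

W = (beta_hat - beta_0) ** 2 * info          # uses only the unrestricted estimate
LM = score(beta_0) ** 2 / info               # uses only the restricted (null) value
LR = 2 * (log_lik(beta_hat) - log_lik(beta_0))
print(W, LM, LR)   # equal here because log L is exactly quadratic in beta;
                   # with sigma^2 estimated they differ, as in the text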

We can now show the relationship between the LR, W, and LM tests geometrically.* Consider testing the null hypothesis H₀: θ = θ₀.

†There is a lot of literature on the W, LR, and LM tests. For a survey, see R. F. Engle, "Wald, Likelihood Ratio, and Lagrange Multiplier Tests in Econometrics," in Z. Griliches and M. D. Intriligator (eds.), Handbook of Econometrics, Vol. 2 (North-Holland Publishing Co., 1984).

*This geometric interpretation is from A. R. Pagan, "Reflections on Australian Macro Modelling," Working Paper, Australian National University, September 1981.


