




SUMMARY • 223

The Test

The PE test* also uses artificial regressions. It involves only two steps. Step 1 is the same as in the BM test.

Step 2. Test θ₀ = 0 and θ₁ = 0 in the artificial regressions:

log yₜ = β₀ + β₁xₜ + θ₀[ŷₜ − exp(L̂ₜ)] + eₜ

yₜ = β₀ + β₁xₜ + θ₁[log ŷₜ − L̂ₜ] + eₜ

where ŷₜ is the fitted value of yₜ from the linear model and L̂ₜ is the fitted value of log yₜ from the log-linear model.

There are many other tests for this problem of choosing between linear and log-linear forms.† The three tests mentioned here are the easiest to compute. We are not presenting an illustrative example here; the computation of the Box-Cox, BM, and PE tests for the data in Tables 4.7 and 5.5 is left as an exercise.
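The two steps above can be sketched with ordinary least squares alone. The sketch below is illustrative, not a reproduction of the book's computations: the single-regressor setup, the simulated data, and all variable names are assumptions, and OLS is implemented directly with numpy rather than a regression package.

```python
import numpy as np

def ols(X, y):
    """OLS fit: returns coefficients, fitted values, and coefficient standard errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    resid = y - fitted
    n, k = X.shape
    sigma2 = resid @ resid / (n - k)
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, fitted, se

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(1.0, 10.0, n)
# Simulated data: the true model here happens to be log-linear
y = np.exp(0.5 + 0.3 * np.log(x) + 0.1 * rng.standard_normal(n))

X = np.column_stack([np.ones(n), x])             # linear model: y on x
Xlog = np.column_stack([np.ones(n), np.log(x)])  # log-linear model: log y on log x

# Step 1 (as in the BM test): fitted values from both models
_, yhat, _ = ols(X, y)               # fitted y from the linear model
_, loghat, _ = ols(Xlog, np.log(y))  # fitted log y from the log-linear model

# Step 2: add the discrepancy term to each model and t-test its coefficient
b1, _, se1 = ols(np.column_stack([Xlog, yhat - np.exp(loghat)]), np.log(y))
b2, _, se2 = ols(np.column_stack([X, np.log(yhat) - loghat]), y)

print("t-statistic for theta_0 (tests the log-linear form):", b1[-1] / se1[-1])
print("t-statistic for theta_1 (tests the linear form):    ", b2[-1] / se2[-1])
```

A significant t-statistic on the added term is evidence against the functional form of that regression.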

Summary

1. If the error variance is not constant for all the observations, this is known as the heteroskedasticity problem. The problem is informally illustrated with an example in Section 5.1.

2. First, we would like to know whether the problem exists. For this purpose some tests have been suggested. We have discussed the following tests:

(a) Ramsey's test.

(b) Glejser's tests.

(c) Breusch and Pagan's test.

(d) White's test.

(e) Goldfeld and Quandt's test.

(f) Likelihood ratio test.

Some of these tests have been illustrated with examples (see Section 5.2); others have been left as exercises. Two data sets (Tables 4.7 and 5.5) are provided so that students can experiment with these tests.
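As one illustration of how such a test can be computed by hand, here is a minimal sketch of the Goldfeld and Quandt procedure: order the observations by the suspect variable, omit some central observations, fit the regression separately to the two remaining subsamples, and compare the residual variances by an F ratio. The simulated data and all names below are assumptions made for the sketch.

```python
import numpy as np

def rss(x, y):
    """Residual sum of squares from OLS of y on a constant and x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

rng = np.random.default_rng(1)
n = 60
x = np.sort(rng.uniform(1.0, 10.0, n))            # observations ordered by x
y = 2.0 + 0.5 * x + x * rng.standard_normal(n)    # error s.d. grows with x

omit = 12                     # omit the central observations
half = (n - omit) // 2        # 24 observations in each subsample
x1, y1 = x[:half], y[:half]   # low-x subsample
x2, y2 = x[-half:], y[-half:] # high-x subsample

df = half - 2                 # degrees of freedom in each subsample regression
F = (rss(x2, y2) / df) / (rss(x1, y1) / df)
print("Goldfeld-Quandt F statistic:", F)
# Compare F with the upper critical value of the F(df, df) distribution;
# a large F suggests that the error variance increases with x.
```

With homoskedastic errors the two residual variances estimate the same quantity, so F should be near 1; values well above the critical value reject homoskedasticity.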

3. The consequences of the heteroskedasticity problem are

(a) The least squares estimators are unbiased but inefficient.

(b) The estimated variances are themselves biased.
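Both consequences can be seen numerically. The sketch below, with simulated heteroskedastic data (an assumption for illustration), computes the conventional OLS standard errors alongside White's heteroskedasticity-consistent correction mentioned later in this summary; the sandwich formula (X′X)⁻¹X′ diag(eₜ²) X (X′X)⁻¹ is the standard HC0 form, not a reproduction of the book's own computation.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.uniform(1.0, 10.0, n)
y = 1.0 + 2.0 * x + x * rng.standard_normal(n)  # error variance grows with x^2

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y                 # OLS: still unbiased under heteroskedasticity
resid = y - X @ beta

# Conventional (incorrect here) variance estimate: s^2 (X'X)^{-1}
s2 = resid @ resid / (n - 2)
se_ols = np.sqrt(np.diag(s2 * XtX_inv))

# White's heteroskedasticity-consistent (HC0) variance estimate
meat = X.T @ (X * resid[:, None] ** 2)
se_white = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

print("slope estimate:", beta[1])
print("conventional SE:", se_ols[1], "  White SE:", se_white[1])
```

The slope estimate itself is trustworthy, but inference based on the conventional standard error is not; the White correction repairs the standard errors without changing the estimates.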

"J. G. Mackinnon, H. White, and R. Davidson, "Tests for Model Specification in the Presence of Alternative Hypotheses: Some Further Results," Journal of Econometrics, Vol. 21, 1983, pp. 53-70.

"For instance, L. G. Godfrey and M. R. Wickens, "Testing Linear and Log-Linear Regressions for Functional Form," Review of Economic Studies, 1981, pp. 487-496, and R. Davidson and J. G. Mackinnon, "Testing Linear and Log-Linear Regressions Against Box-Cox Alternatives," Canadian Journal of Economics, 1985, pp. 499-517.



If the heteroskedasticity problem is detected, we can try to solve it by the use of weighted least squares. Otherwise, we can at least try to correct the estimated variances (since the estimators are still unbiased). This correction (due to White) is illustrated at the end of Section 5.3.

4. There are three solutions commonly suggested for the heteroskedasticity problem:

(a) Use of weighted least squares.

(b) Deflating the data by some measure of "size."

(c) Transforming the data to the logarithmic form.

In weighted least squares, the particular weighting scheme used depends on the nature of the heteroskedasticity. Weighted least squares methods are illustrated in Section 5.4.

5. The use of deflators is similar to the weighted least squares method, although it is done in a more ad hoc fashion. Some problems with the use of deflators are discussed in Section 5.5.

6. The question of estimation in linear versus logarithmic form has received considerable attention in recent years. Several statistical tests have been suggested for choosing between the two forms. In Section 5.6 we discuss three of these tests: the Box-Cox test, the BM test, and the PE test. All are easy to implement with standard regression packages. We have not illustrated the use of these tests; this is left as an exercise. It would be interesting to see which functional form is chosen and whether the heteroskedasticity problem exists for the chosen form.

7. Note that the tests discussed in Section 5.6 start by assuming homoskedastic errors for both functional forms.

Exercises

1. Define the terms "heteroskedasticity" and "homoskedasticity." Explain the effects of heteroskedasticity on the estimates of the parameters and their variances in a normal regression model.

2. Explain the following tests for homoskedasticity:

(a) Ramsey's test.

(b) Goldfeld and Quandt's test.

(c) Glejser's test.

(d) Breusch and Pagan's test.

Illustrate each of these tests with the data in Tables 4.7 and 5.5.

3. Indicate whether each of the following statements is true (T), false (F), or uncertain (U), and give a brief explanation.

(a) Heteroskedasticity in the errors leads to biased estimates of the regression coefficients and their standard errors.

(b) Deflating income and consumption by the same price results in a higher estimate of the marginal propensity to consume.

(c) The correlation between two ratios that have the same denominator is always biased upward.

4. Apply the following tests to choose between the linear and log-linear regression models with the data in Tables 4.7 and 5.5.

(a) Box-Cox test.

(b) BM test.

(c) PE test.
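As a starting point for this exercise, one common way to implement the Box-Cox comparison is to divide y by its geometric mean so that the residual sums of squares of the linear and log-linear regressions become directly comparable; the model with the smaller RSS is preferred, and (n/2)|log(RSS ratio)| gives an approximate chi-square(1) statistic. The sketch below uses simulated data in place of Tables 4.7 and 5.5, so the data and all names are assumptions.

```python
import numpy as np

def rss(X, z):
    """Residual sum of squares from OLS of z on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    r = z - X @ beta
    return r @ r

rng = np.random.default_rng(3)
n = 80
x = rng.uniform(1.0, 10.0, n)
y = np.exp(1.0 + 0.8 * np.log(x) + 0.2 * rng.standard_normal(n))

gm = np.exp(np.mean(np.log(y)))   # geometric mean of y
ys = y / gm                       # scaling makes the two RSS comparable

X = np.column_stack([np.ones(n), x])
Xlog = np.column_stack([np.ones(n), np.log(x)])

rss_lin = rss(X, ys)              # linear model on scaled y
rss_log = rss(Xlog, np.log(ys))   # log-linear model on scaled y

stat = (n / 2) * abs(np.log(rss_lin / rss_log))
print("RSS linear:", rss_lin, "  RSS log-linear:", rss_log)
print("test statistic:", stat, "(compare with the chi-square(1) critical value 3.84)")
```

The same two fitted regressions also supply the Step 1 ingredients for the BM and PE tests, so one pass over the data covers all three.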

5. In the model

y₁ₜ = α₁₁x₁ₜ + α₁₂x₂ₜ + u₁ₜ

y₂ₜ = α₂₁x₁ₜ + α₂₂x₂ₜ + u₂ₜ

you are told that

α₁₁ + α₁₂ = α₂₁ and α₁₁ − α₁₂ = α₂₂

u₁ₜ ~ IN(0, σ²), u₂ₜ ~ IN(0, 4σ²), and u₁ₜ and u₂ₜ are independent. Explain how you will estimate the parameters α₁₁, α₁₂, α₂₁, α₂₂, and σ².

6. Explain how you will choose among the following four regression models:

y = α₁ + β₁x + u₁

y = α₂ + β₂ log x + u₂

log y = α₃ + β₃x + u₃

log y = α₄ + β₄ log x + u₄

7. In the linear regression model

yₜ = α + βxₜ + uₜ

the errors uₜ are presumed to have a variance depending on a variable zₜ. Explain how you will choose among the following four specifications:

var(uₜ) = σ²zₜ    var(uₜ) = σ²zₜ²

var(uₜ) = σ²(a + bzₜ)    var(uₜ) = σ²(a + bzₜ)²

8. In a study of 27 industrial establishments of varying size, y = the number of supervisors and x = the number of supervised workers; y varies from 30 to 210 and x from 247 to 1650. The results obtained were as follows:

Variable    Coefficient    Standard Error    t-ratio
x           0.115          0.011             9.30
Constant    14.448         9.562             1.51

n = 27    s = 21.73    R² = 0.776

After the estimation of the equation and plotting the residuals against x, it was found that the variance of the residuals increased with x. Plotting the


