




Table 8.6  Earnings Equations Estimated from the New Jersey Negative-Income-Tax Experiment*

Variable           (1)        (2)
Constant           8.203      9.102
                   (0.091)    (0.026)
Education          0.010      0.015
                   (0.006)    (0.007)
(label missing)    0.002      0.006
                   (0.002)    (0.005)
Training           0.002      0.006
                   (0.001)    (0.003)
Union              0.090      0.246
                   (0.030)    (0.089)
Illness            -0.076     -0.226
                   (0.038)    (0.107)
Age (linear)       -0.003     -0.016
                   (0.002)    (0.005)

*Figures in parentheses are standard errors.

Summary

1. In this chapter we discussed

(a) Dummy explanatory variables.

(b) Dummy dependent variables.

(c) Truncated dependent variables.

2. Dummy explanatory variables can be used in tests for coefficient stability in linear regression models, for obtaining predictions from linear regression models, and for imposing cross-equation constraints. These uses have been illustrated with examples.
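The stability-testing use of dummy variables can be sketched in code. The following is a minimal simulation (all data and numbers invented for illustration, not from the text): pooling two periods and adding a period dummy plus its interaction with the regressor, then comparing restricted and unrestricted residual sums of squares, reproduces the Chow test of coefficient stability.

```python
# Sketch: dummy-variable (Chow) test of coefficient stability across two
# periods, using simulated data in which the true slope shifts.
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 40, 40
x = rng.normal(size=n1 + n2)
# True slope changes from 1.0 to 1.5 in the second period.
y = np.where(np.arange(n1 + n2) < n1, 2 + 1.0 * x, 2 + 1.5 * x)
y = y + rng.normal(scale=0.5, size=n1 + n2)

d = (np.arange(n1 + n2) >= n1).astype(float)  # period-2 dummy

def ssr(X, y):
    """Residual sum of squares from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

ones = np.ones_like(x)
ssr_restricted = ssr(np.column_stack([ones, x]), y)              # pooled model
ssr_unrestricted = ssr(np.column_stack([ones, x, d, d * x]), y)  # + dummy and interaction

q, k = 2, 4  # 2 restrictions tested; 4 parameters in the unrestricted model
F = ((ssr_restricted - ssr_unrestricted) / q) / (ssr_unrestricted / (len(y) - k))
print(round(F, 2))  # a large F signals unstable coefficients
```

The unrestricted regression here is exactly the pooled regression with a dummy intercept shift and a dummy-slope interaction, so no separate subsample regressions are needed.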

3. One should exercise caution in using the dummy variables when there is heteroskedasticity or autocorrelation. In the presence of autocorrelated errors, the dummy variables for testing stability have to be defined suitably. The proper definitions are given in Section 8.6.

4. Regarding the dummy dependent variables, there are three different models that one can use: the linear probability model, the logit model, and the probit model. The linear discriminant function is closely related to the linear probability model. The coefficients of the discriminant function are just proportional to those of the linear probability model (see Section 8.8). Thus there is nothing new in linear discriminant analysis. The linear probability model has the drawback that the predicted values can be outside the permissible interval (0, 1).
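The drawback noted in point 4 is easy to demonstrate. A minimal sketch with simulated data (nothing below comes from the text): fitting a 0-1 outcome by OLS produces fitted "probabilities" outside [0, 1] for extreme values of the regressor.

```python
# Sketch: the linear probability model fits y in {0, 1} by OLS, so nothing
# constrains the fitted values to lie in (0, 1).
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200) * 2
p = 1 / (1 + np.exp(-2 * x))               # true probabilities (logistic)
y = (rng.uniform(size=200) < p).astype(float)

X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta

# Count fitted "probabilities" below 0 and above 1.
print((fitted < 0).sum(), (fitted > 1).sum())
```

The logit and probit models avoid this by passing the linear index through a cumulative distribution function, which maps any index value into (0, 1).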

5. In the analysis of models with dummy dependent variables, we assume the existence of a latent (unobserved) continuous variable which is specified as the usual regression model. However, the latent variable can be observed only as a dichotomous variable. The difference between the logit and probit models lies in the assumptions made about the error term. If the error term has a logistic distribution, we have the logit model; if it has a normal distribution, we have the probit model. From the practical point of view, there is not much to choose between the two: the results are usually very similar. If both models are computed, one should make some adjustments in the coefficients to make them comparable. These adjustments have been outlined in Section 8.9.

6. For comparing the linear probability, logit, and probit models, one can look at the number of cases correctly predicted. However, this is not enough. It is better to look at some measures of R². In Section 8.9 we discuss several such measures: the squared correlation between ŷ and y, Efron's R², Cragg and Uhler's R², and McFadden's R². For practical purposes the first two are descriptive enough. The computation of the different R² measures is illustrated with an example in Section 8.10.

7. The tobit model is a censored regression model. Observations on the latent variable y* are missing (or censored) if y* is below (or above) a certain threshold level. This model has been used in a large number of applications where the dependent variable is observed to be zero for some individuals in the sample (automobile expenditures, medical expenditures, hours worked, wages, etc.). However, on careful scrutiny we find that the censored regression model (tobit model) is inappropriate for the analysis of these problems. The tobit model is, strictly speaking, applicable in only those situations where the latent variable can, in principle, take negative values, but these negative values are not observed because of censoring. Where the zero observations are a consequence of individual decisions, these decisions should be modeled appropriately and the tobit model should not be used mechanically.

8. Sometimes samples are drawn from truncated distributions. In this case the truncated regression model should be used. This model is different from the censored regression model (tobit model).

9. The LIMDEP program can be used to compute the logit, probit, tobit, truncated regression, and related models discussed here.

Exercises

1. Explain the meaning of each of the following terms.

(a) Seasonal dummy variables.

(b) Dummy dependent variables.

(c) Linear probability model.

(d) Linear discriminant function.

(e) Logit model.

(f) Probit model.

(g) Tobit model.

(h) Truncated regression model.
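The logit-probit coefficient adjustment mentioned in the summary stems from the scale difference between the logistic and standard normal distributions. A small deterministic sketch (the 1.6 factor is the conventional rule of thumb, not a value taken from the text): rescaling the argument of the logistic cdf by 1.6 makes it track the normal cdf closely, which is why logit coefficients are roughly 1.6 times the probit ones.

```python
# Sketch: compare the logistic cdf evaluated at 1.6*z with the standard
# normal cdf at z; the maximum gap over a grid is small, illustrating why
# logit and probit estimates differ mainly by a scale factor.
import math

def logistic_cdf(z):
    return 1 / (1 + math.exp(-z))

def normal_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

max_gap = max(abs(logistic_cdf(1.6 * z) - normal_cdf(z))
              for z in [i / 10 for i in range(-40, 41)])
print(round(max_gap, 3))
```

The two curves agree to within about two percentage points everywhere, which is why, in practice, there is little to choose between the two models.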



D1 = seasonal dummy = 1 for the first quarter, 0 otherwise
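A dummy such as D1 is easy to construct in code. The sketch below (with an invented quarterly index) builds all four quarterly dummies and checks the rank of the design matrix when they are entered together with an intercept: the five columns are linearly dependent, since the four dummies sum to the constant column.

```python
# Sketch: quarterly seasonal dummies, and the exact collinearity that
# arises when all four are included alongside an intercept.
import numpy as np

quarters = np.tile([1, 2, 3, 4], 5)  # 20 invented quarterly observations
D = np.column_stack([(quarters == q).astype(float) for q in (1, 2, 3, 4)])

X = np.column_stack([np.ones(len(quarters)), D])  # intercept + all 4 dummies
print(np.linalg.matrix_rank(X), X.shape[1])       # rank 4 < 5 columns
```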

2. What would be your answer to the following queries?

(a) My regression program refuses to estimate four seasonal coefficients when I enter the quarterly data including a zero-one dummy for each quarter. What am I supposed to do?

(b) I estimated a model with a zero-one dependent variable using the logit and probit programs. The coefficients I got from the probit program were all smaller than the corresponding coefficients estimated by the logit program. Is there something wrong with my programs?

(c) I have data on medical expenditures on a sample of individuals. Some of them, who did not have any ailments, or did not bother to go to the doctor even if they had ailments, had no expenditures. I wish to estimate the income elasticity of medical expenditures. I am thinking of dropping the individuals with zero expenditures and estimating the model by OLS. My friend says that I would be overestimating the income elasticity by doing this. Is she correct?

3. Explain how you would use dummy variables for generating predictions from a regression model.

4. In the model

Y_t = β1 x_{1t} + β2 x_{2t} + β3 x_{3t} + u_t

the coefficients are known to be related to a more basic economic parameter α according to the equations

β1 + β2 = α        β2 + β3 = -α

Explain how you would estimate α and the variance of your estimate of α.

5. In the model

y„ = ox,, + pX2, + M, Y2, = 0X2, + U21 Y3, = Pi/ + W3,

where „ ~ IN(0, 2o-), M2, ~ IN(0, cr), Uj, ~ IN(0, a), and ,„ Mj,, , are mutually independent, explain how you will estimate a, p, and a.

6. The following equation was estimated to explain a short-term interest rate (figures in parentheses are standard errors):

Y_t = 5.5 + 0.93 X_t - 0.38 X_{t-1} - 5.2 (P_t/P_{t-1}) + 0.50 F_{t-1}
     (1.3)  (0.04)    (0.09)         (1.3)               (0.07)

      - 0.05(D1 - D4) + 0.08(D2 - D4) + 0.06(D3 - D4)
       (0.04)           (0.04)          (0.04)

R² = 0.90    R̄² = 0.89    SEE = 0.19    DW = 1.3    n = 92

where Y = interest rate on 4- to 6-month commercial paper (percent)
      X = interest rate on 90-day Treasury bills (percent)


