
The Wald Test

Define

$f(\beta) = \beta_1\beta_2 + \beta_3$

Using a first-order Taylor series expansion, we get

$f(\hat\beta) = f(\beta) + \sum_{i=1}^{3} g_i(\hat\beta_i - \beta_i)$

where $g_i = \partial f/\partial \beta_i$. Under the null hypothesis $f(\beta) = 0$ and

$\operatorname{var}[f(\hat\beta)] = \sum_i \sum_j g_i g_j \operatorname{cov}(\hat\beta_i, \hat\beta_j) = \sigma_f^2 \quad \text{(say)}$

since

$\operatorname{cov}(\hat\beta_i, \hat\beta_j) = \sigma^2 C_{ij}$

The Wald test statistic is obtained by substituting $\hat g_i$ for $g_i$ and $\hat\sigma^2$ for $\sigma^2$ in $\sigma_f^2$. Denoting the resulting expression by $\hat\sigma_f^2$, we get the statistic

$W = \dfrac{[f(\hat\beta)]^2}{\hat\sigma_f^2}$

which has (asymptotically) a $\chi^2$-distribution with 1 degree of freedom. In the particular case we are considering, note that $g_1 = \beta_2$, $g_2 = \beta_1$, and $g_3 = 1$.

However, there are some problems with the Wald test. The restriction (6.27) can as well be written as

$f_1(\beta) = \beta_1 + \beta_3/\beta_2 = 0$ (6.28)

or as

$f_2(\beta) = \beta_2 + \beta_3/\beta_1 = 0$ (6.28')

If we write it as (6.28), we have

$g_1 = 1, \qquad g_2 = -\beta_3/\beta_2^2, \qquad g_3 = 1/\beta_2$

and if we write it as (6.28') we have

$g_1 = -\beta_3/\beta_1^2, \qquad g_2 = 1, \qquad g_3 = 1/\beta_1$

Although, asymptotically, it should not matter how the Wald test is constructed, in practice it has been found that the results differ depending on how we formulate the restrictions.

"A. W. Gregory and M. R. Veall, "On Formulating Wald Tests of Non-linear Restrictions," Econometrica, November 1985. The authors confirm by Monte Carlo studies and an empirical example that these differences can be substantial. See also A. W. Gregory and M. R. Veall, "Wald Tests of Common Factor Restrictions," Economics Letters, Vol. 22, 1986, pp. 203-208.

However, formulations (6.28) and (6.28') implicitly assume that $\beta_2 \neq 0$ or $\beta_1 \neq 0$, respectively, and thus in this case it is more meaningful to use the restriction in the form (6.27) rather than (6.28) or (6.28').
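To see concretely how the different formulations can yield different statistics, here is a minimal sketch of the delta-method calculation of $W$. The estimates b and covariance matrix V are hypothetical placeholders (not values from this chapter's example), and the helper function wald is my own:

```python
import numpy as np

# Hypothetical estimates and covariance matrix for (b1, b2, b3);
# illustrative numbers only, not taken from the text's example.
b = np.array([0.8, 0.5, -0.35])
V = np.array([[0.040, 0.010, 0.005],
              [0.010, 0.030, 0.008],
              [0.005, 0.008, 0.020]])

def wald(f, grad, b, V):
    """Delta-method Wald statistic W = f(b)^2 / (g' V g)."""
    g = grad(b)
    return f(b) ** 2 / (g @ V @ g)

# Restriction (6.27): f = b1*b2 + b3, gradient (b2, b1, 1)
w27 = wald(lambda b: b[0] * b[1] + b[2],
           lambda b: np.array([b[1], b[0], 1.0]), b, V)

# Formulation (6.28): f = b1 + b3/b2, gradient (1, -b3/b2^2, 1/b2)
w28 = wald(lambda b: b[0] + b[2] / b[1],
           lambda b: np.array([1.0, -b[2] / b[1] ** 2, 1.0 / b[1]]), b, V)

# Formulation (6.28'): f = b2 + b3/b1, gradient (-b3/b1^2, 1, 1/b1)
w28p = wald(lambda b: b[1] + b[2] / b[0],
            lambda b: np.array([-b[2] / b[0] ** 2, 1.0, 1.0 / b[0]]), b, V)

print(w27, w28, w28p)  # the three values generally differ in finite samples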

Note that a hypothesis like $\beta_1/\beta_2 = c$ can be transformed into the linear hypothesis $\beta_1 - c\beta_2 = 0$. Similarly, $\beta_1/(1 - \beta_2) = c$ can be transformed to $\beta_1 + c\beta_2 - c = 0$. On the other hand, if, for some reason, an exact confidence interval is also needed for $\beta_1/\beta_2$, we can use Fieller's method described in Section 3.10. Noting the relationship between confidence intervals and tests of hypotheses, one can construct a test for the hypothesis $\beta_1/\beta_2 = c$.
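For reference, the t-statistic for the linearized hypothesis $\beta_1 - c\beta_2 = 0$ follows from standard regression theory (this is a generic formula, not one stated in the text):

$$t = \frac{\hat\beta_1 - c\hat\beta_2}{\sqrt{\widehat{\operatorname{var}}(\hat\beta_1) + c^2\,\widehat{\operatorname{var}}(\hat\beta_2) - 2c\,\widehat{\operatorname{cov}}(\hat\beta_1, \hat\beta_2)}}$$

which is referred to the t-distribution with the usual residual degrees of freedom.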

Illustrative Example

Consider the data in Table 3.11 and the estimation of the production function (4.24). In Section 6.4 we presented estimates of the equation assuming that the errors are AR(1). This was based on a DW test statistic of 0.86. Suppose that we estimate an equation of the form (6.26). The results are as follows (all variables in logs; figures in parentheses are standard errors):

$X_t = \underset{(0.530)}{-2.254} + \underset{(0.139)}{0.884}\,L_t + \underset{(0.152)}{0.710}\,K_t + \underset{(0.120)}{0.489}\,X_{t-1} - \underset{(0.252)}{0.073}\,L_{t-1} - \underset{(0.150)}{0.541}\,K_{t-1} \qquad \text{RSS}_0 = 0.01718$

Under the assumption that the errors are AR(1), the residual sum of squares obtained from the Hildreth-Lu procedure we used in Section 6.4 is RSS$_1$ = 0.02635.
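The Hildreth-Lu procedure referred to here is, in essence, a grid search over the autocorrelation parameter $\rho$. A minimal sketch, assuming y is the response vector and X the regressor matrix (including a constant column); the function name and grid spacing are my own choices:

```python
import numpy as np

def hildreth_lu(y, X, grid=np.arange(-0.99, 1.0, 0.01)):
    """For each rho on the grid, quasi-difference the data,
    run OLS, and keep the rho minimizing the residual sum of squares."""
    best_rho, best_rss, best_coef = None, np.inf, None
    for rho in grid:
        y_star = y[1:] - rho * y[:-1]   # quasi-differenced response
        X_star = X[1:] - rho * X[:-1]   # quasi-differenced regressors
        coef, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
        rss = float(((y_star - X_star @ coef) ** 2).sum())
        if rss < best_rss:
            best_rho, best_rss, best_coef = rho, rss, coef
    return best_rho, best_rss, best_coef
```

RSS$_1$ above corresponds to the minimized residual sum of squares from this kind of search.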

Since we have two slope coefficients, we have two restrictions of the form (6.27). Note that for the general dynamic model we are estimating six parameters ($\alpha$ and five $\beta$'s). For the serial correlation model we are estimating four parameters ($\alpha$, two $\beta$'s, and $\rho$). We will use the likelihood ratio (LR) test, which is based on (see the appendix to Chapter 3)

" Uss,/

and -2 log> has a x-distribution with d.f. 2 (number of restrictions). In our example


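As a quick check of this arithmetic (the sample size n = 39 is my assumption about the Table 3.11 data, as noted above):

```python
import math

rss0 = 0.01718  # unrestricted RSS (general dynamic model), from the text
rss1 = 0.02635  # restricted RSS (AR(1) errors, Hildreth-Lu), from the text
n = 39          # assumed number of observations in Table 3.11

lr = n * math.log(rss1 / rss0)  # -2 log(lambda)
print(round(lr, 2))             # about 16.68, well above the 1% point 9.21
```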

*6.10 Trends and Random Walks

Throughout our discussion we have assumed that $E(u_t) = 0$ and $\operatorname{var}(u_t) = \sigma^2$ for all $t$, and $\operatorname{cov}(u_t, u_{t-k}) = \sigma^2\rho_k$ for all $t$ and $k$, where $\rho_k$ is the serial correlation of lag $k$ (this is simply a function of the lag and does not depend on $t$). If these assumptions are satisfied, the series $u_t$ is called covariance stationary (covariances are constant over time) or just stationary. Many economic time series are clearly nonstationary in the sense that the mean and variance depend on time, and they tend to depart ever further from any given value as time goes on. If this movement is predominantly in one direction (up or down), we say that the series exhibits a trend. More detailed discussion of the topics covered briefly here can be found in Chapter 14.

Nonstationary time series are frequently de-trended before further analysis is done. There are two procedures used for de-trending.

1. Estimating regressions on time.

2. Successive differencing.

In the regression approach it is assumed that the series $y_t$ is generated by the mechanism

$y_t = f(t) + u_t$

where $f(t)$ is the trend and $u_t$ is a stationary series with mean zero and variance $\sigma_u^2$. Let us suppose that $f(t)$ is linear, so that we have

$y_t = \alpha + \beta t + u_t$ (6.29)

Note that the trend-eliminated series is $\hat u_t$, the least squares residuals, which satisfy the relationships $\sum \hat u_t = 0$ and $\sum t\hat u_t = 0$. If differencing is used to eliminate the trend, we get $\Delta y_t = y_t - y_{t-1} = \beta + u_t - u_{t-1}$. We have to take a first difference again to eliminate $\beta$, and we get $\Delta^2 y_t = \Delta^2 u_t = u_t - 2u_{t-1} + u_{t-2}$ as the de-trended series (both de-trending procedures are illustrated in the sketch following (6.31) below). On the other hand, suppose we assume that $y_t$ is generated by the model

$y_t - y_{t-1} = \beta + e_t$ (6.30)

where $e_t$ is a stationary series with mean zero and variance $\sigma_e^2$. In this case the first difference of $y_t$ is stationary with mean $\beta$. This model is also known as the random-walk model. Accumulating the changes in $y_t$ starting with an initial value $y_0$, we get from (6.30)

$y_t = y_0 + \beta t + \sum_{\tau=1}^{t} e_\tau$ (6.31)
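A short simulation makes the contrast between the two de-trending procedures concrete. This is only a sketch; the sample size, parameter values, and seed are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)   # arbitrary seed
T = 200                          # arbitrary sample size
t = np.arange(1, T + 1)

# Trend-stationary series, as in (6.29): y_t = alpha + beta*t + u_t
u = rng.normal(0.0, 1.0, T)
y_ts = 1.0 + 0.1 * t + u

# Random walk with drift, as in (6.30): y_t - y_{t-1} = beta + e_t
# (with y_0 = 0, so y_t = beta*t + sum of e's, matching (6.31))
e = rng.normal(0.0, 1.0, T)
y_rw = np.cumsum(0.1 + e)

def detrend_by_regression(y, t):
    """Residuals from an OLS regression of y on a constant and t."""
    X = np.column_stack([np.ones(len(t)), t])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef

for name, y in [("trend-stationary", y_ts), ("random walk", y_rw)]:
    resid = detrend_by_regression(y, t)
    diff = np.diff(y)            # first differences; their mean is beta
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation
    print(f"{name}: lag-1 autocorr of residuals = {r1:.2f}, "
          f"mean of first differences = {diff.mean():.2f}")

# Regression de-trending leaves the random walk's residuals highly
# autocorrelated (near 1), while differencing renders it stationary.
```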


