




13.7 R² MEASURES IN TIME-SERIES MODELS

where RSS is, as before, the residual sum of squares from the model and RSS_0 is the residual sum of squares from the naive model, which in this case is the random walk with drift with seasonal dummies, that is,

Δy_t = α_1 S_1 + ··· + α_s S_s + e_t

where S_1, S_2, …, S_s are the seasonal dummies. For quarterly data s = 4, and for monthly data s = 12. For most time-series data this naive model fits well, and any alternative model has to do better than this. A model for which R²_D < 0 should be discarded as useless. Again, we can adjust the numerator and denominator in R²_D for degrees of freedom and define the resulting measure as R̄²_D.
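As a sketch of the computation (hypothetical data and variable names; this simply implements R²_D = 1 − RSS/RSS_0 with the naive regression of Δy_t on seasonal dummies):

```python
import numpy as np

def naive_rss0(y, s=4):
    """RSS_0: residual sum of squares from regressing the first difference
    dy_t on s seasonal dummies (the random walk with drift and seasonal
    dummies used as the naive model). Quarterly data: s = 4; monthly: s = 12."""
    dy = np.diff(y)
    n = len(dy)
    D = np.zeros((n, s))
    D[np.arange(n), np.arange(n) % s] = 1.0   # dummy for the season of each observation
    coef, *_ = np.linalg.lstsq(D, dy, rcond=None)
    resid = dy - D @ coef
    return float(resid @ resid)

def r2_d(rss_model, rss0):
    """R^2_D = 1 - RSS/RSS_0; a negative value means the model does worse
    than the naive random walk with drift and seasonal dummies."""
    return 1.0 - rss_model / rss0

# Illustration on simulated quarterly data (seed fixed for reproducibility).
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.5, 1.0, size=120))  # a random walk with drift
rss0 = naive_rss0(y, s=4)
```

Any candidate model's RSS can then be passed to r2_d alongside this rss0.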

As an example, with the airline data mentioned earlier, R² = 0.983 and RSS = 0.47232. But RSS_0 = 0.19405 (from the random walk with seasonal dummies). Hence R²_D = 1 − 0.47232/0.19405 = −1.434. This indicates that the model is indeed weak even though it has a high R².

As yet another example, consider the Nelson and Plosser data on per capita real GNP (PCRGNP) in Table 13.3 at the end of the chapter. In this case, Σ(Δy_t − Δ̄y)² is 1.057 and the R²_D for the model y_t = α + βt + e_t is −3.13. This suggests that the model y_t = α + βt + e_t with R² = 0.86 is inadequate.
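In the non-seasonal case the naive model is just the random walk with drift, so RSS_0 reduces to the sum of squared deviations of Δy_t about its mean. A small sketch (the 1.057 and −3.13 are the figures quoted above; everything else is illustrative):

```python
import numpy as np

def rss0_drift(y):
    """Naive RSS_0 without seasonality: deviations of dy_t about its mean,
    i.e. the RSS from the random walk with drift, dy_t = mu + e_t."""
    dy = np.diff(y)
    return float(np.sum((dy - dy.mean()) ** 2))

# With the figures quoted for PCRGNP, RSS_0 = 1.057 and R^2_D = -3.13,
# so the trend model y_t = a + b*t + e_t must have RSS = (1 - R^2_D) * RSS_0.
implied_rss = (1 - (-3.13)) * 1.057
print(round(implied_rss, 2))  # 4.37
```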

Let us illustrate the procedure of fitting autoregressions of different orders for these data. Figures 13.8 and 13.9 show the correlograms for the logarithm of PCRGNP in levels as well as in first differences. The correlograms suggest that the series is nonstationary in the levels form but stationary in the first differences.

Figure 13.8. Correlogram for the Nelson-Plosser data on real per capita GNP.

Figure 13.9. Correlogram for first differences of real per capita GNP.

Also, the fluctuation in the correlogram suggests that AR models are appropriate. We therefore fitted autoregressive models AR(1), AR(2), AR(3), and so on, for Δy_t. The appropriate order of the autoregression can be chosen on the basis of the partial autocorrelations. The partial autocorrelations of order 1, 2, 3, and so on, are defined as the last coefficients in the AR(1), AR(2), AR(3), and so on, regressions. It has been proved that each of the partial autocorrelations has SE = 1/√N. We can also compute R²_D for these models as well as for the model y_t = α + βt + e_t. Table 13.1 shows the partial autocorrelations, R²_D, R̄²_D, and Akaike's FPE for Δy_t. The partial autocorrelations, R̄²_D, and the FPEs all suggest that the appropriate order of the autoregression is 1.

Table 13.1

Order   Partial Autocorrelation   R²_D      R̄²_D      Akaike's FPE
1        0.3315                   0.1097     0.0790    0.0041
2       -0.0802                   0.1155     0.0523    0.0042
3       -0.1799                   0.1466     0.0518    0.0043
4       -0.1101                   0.1624     0.0335    0.0045
5       -0.0917                   0.1911     0.0294    0.0046
6        0.0952                   0.1996    -0.0006    0.0048
7       -0.0001                   0.2112    -0.0289    0.0050
8       -0.1810                   0.2472    -0.0265    0.0050
9       -0.1891                   0.3045     0.0064    0.0049
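The mechanics of this order-selection exercise can be sketched as follows (hypothetical data; the FPE formula used here is one common variant and is an assumption, not necessarily the exact one used for the table):

```python
import numpy as np

def ar_fit(x, p):
    """OLS fit of an AR(p) with intercept; returns (coefs, rss, n_obs)."""
    n = len(x)
    y = x[p:]
    # Columns: intercept, then lags 1..p of x, aligned with y = x[p:].
    X = np.column_stack([np.ones(n - p)] + [x[p - j : n - j] for j in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, float(resid @ resid), n - p

def pacf_and_fpe(x, max_p):
    """For p = 1..max_p return (p, partial autocorrelation, FPE).
    The order-p partial autocorrelation is the last coefficient of the AR(p)
    regression; FPE = ((n + p + 1)/(n - p - 1)) * RSS / n is one common form."""
    out = []
    for p in range(1, max_p + 1):
        coef, rss, n = ar_fit(x, p)
        out.append((p, float(coef[-1]), (n + p + 1) / (n - p - 1) * rss / n))
    return out
```

For a true AR(1), the lag-1 partial autocorrelation should be large while higher-order ones stay within about 2/√N of zero, mirroring the pattern in Table 13.1.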

Summary

1. Time series can be classified as stationary or nonstationary. When we refer to a stationary time series, we mean covariance stationarity.

2. Some models commonly used for stationary series are the autoregressive moving-average (ARMA) models. Conditions are given for the stability of AR models and the invertibility of MA models. If an MA model satisfies the invertibility condition, we can write it as an infinite autoregression.

3. AR models can be estimated by OLS, but estimation of MA models is somewhat complex. In practice, however, the MA part does not involve too many parameters. A grid-search procedure is described for the estimation of MA models.

4. After estimating an ARMA model, one has to apply tests for serial correlation to check that the errors are white noise. Tests for this are described. Also, different ARMA models can be compared using the AIC and BIC criteria. For comparing different AR models, Akaike's FPE criterion can also be used.

5. Another goodness-of-fit measure is R². However, time-series data usually have strong trends and seasonals, and the R²'s we get are often very high. It is difficult to judge the usefulness of a model just by looking at a high R². Some alternative measures have been discussed, and these should be used.
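The grid-search procedure mentioned in point 3 can be sketched for an MA(1), y_t = e_t + θe_{t−1} (a minimal conditional-sum-of-squares version, assuming e_0 = 0):

```python
import numpy as np

def ma1_css(y, theta):
    """Conditional sum of squares for y_t = e_t + theta * e_{t-1}:
    recover the errors recursively with e_0 = 0 and sum their squares."""
    e = 0.0
    s = 0.0
    for yt in y:
        e = yt - theta * e   # e_t = y_t - theta * e_{t-1}
        s += e * e
    return s

def ma1_grid_search(y, grid=None):
    """Return the theta on the grid minimizing the conditional sum of squares."""
    if grid is None:
        # Steps of 0.05 inside the invertibility region |theta| < 1.
        grid = np.linspace(-0.95, 0.95, 39)
    return min(grid, key=lambda th: ma1_css(y, th))

# Quick check on simulated MA(1) data with theta = 0.5 (seed fixed).
rng = np.random.default_rng(0)
e = rng.normal(size=2000)
y = e[1:] + 0.5 * e[:-1]
theta_hat = ma1_grid_search(y)
```

In practice the grid can be refined around the minimizing value, which is why the MA part rarely involving many parameters keeps the search cheap.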

Exercises

1. Figure 13.1 shows the graph of the stationary series X_t = 0.7X_{t-1} + e_t with e_t ~ IN(5, 1). To generate the series we need the starting value X_0. We took it as X_0 = E(X_t). We find it from the equation E(X_t) = 0.7E(X_{t-1}) + E(e_t), or μ = 0.7μ + 5. This gives μ = 5/0.3 ≈ 16.67. Using the same X_0, graph the nonstationary series X_t = 10 + t + 0.7X_{t-1} + e_t for t = 1, 2, …, 100. This is called a trend-stationary series.

2. Starting with the same initial value X_0, graph the difference-stationary series X_t − X_{t-1} = 10 + e_t.
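A sketch of how the three series in Exercises 1 and 2 could be generated (simulation only; the plotting step is omitted):

```python
import numpy as np

# All three series use the same starting value X_0 = 5/0.3 (about 16.67)
# and innovations e_t ~ IN(5, 1), as specified in Exercise 1.
rng = np.random.default_rng(0)
T = 100
x0 = 5 / 0.3
e = rng.normal(5.0, 1.0, size=T + 1)

stat = np.empty(T + 1); stat[0] = x0    # stationary:          X_t = 0.7 X_{t-1} + e_t
trend = np.empty(T + 1); trend[0] = x0  # trend-stationary:    X_t = 10 + t + 0.7 X_{t-1} + e_t
diff = np.empty(T + 1); diff[0] = x0    # difference-stationary: X_t = X_{t-1} + 10 + e_t
for t in range(1, T + 1):
    stat[t] = 0.7 * stat[t - 1] + e[t]
    trend[t] = 10 + t + 0.7 * trend[t - 1] + e[t]
    diff[t] = diff[t - 1] + 10 + e[t]
```

Graphed together, the stationary series fluctuates around 16.67 while the other two wander or trend away from it.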

3. Which of the following AR(2) processes are stable?

(a) X_t = 0.9X_{t-1} − 0.2X_{t-2} + e_t.

(b) X_t = 0.8X_{t-1} + 0.4X_{t-2} + e_t.

(c) X_t = 1.0X_{t-1} − 0.8X_{t-2} + e_t.
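One way to check cases like these numerically: for X_t = a_1 X_{t-1} + a_2 X_{t-2} + e_t, stability requires the roots of z² − a_1 z − a_2 = 0 to lie inside the unit circle. A sketch:

```python
import numpy as np

def ar2_stable(a1, a2):
    """AR(2) stability: both roots of z^2 - a1*z - a2 = 0 inside the unit circle."""
    roots = np.roots([1.0, -a1, -a2])
    return bool(np.all(np.abs(roots) < 1.0))

# The three processes of Exercise 3: (a) and (c) are stable, (b) is not.
for label, (a1, a2) in {"a": (0.9, -0.2), "b": (0.8, 0.4), "c": (1.0, -0.8)}.items():
    print(label, ar2_stable(a1, a2))
```

Case (c) has complex roots 0.5 ± 0.74i with modulus √0.8 ≈ 0.89, so the series is stable but cycles.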


