




$$y_t = c + \varepsilon_t + \beta \varepsilon_{t-1} \tag{11.20}$$

where $\varepsilon_t \sim$ i.i.d.$(0, \sigma^2)$. This model is a stationary representation for any values of $c$ or $\beta$, since $E(y_t) = c$,

$$V(y_t) = (1 + \beta^2)\sigma^2 \tag{11.21}$$

$$\operatorname{cov}(y_t, y_{t-s}) = \beta\sigma^2 \text{ if } s = 1 \text{ and } 0 \text{ otherwise}. \tag{11.22}$$
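To make these moments concrete, one can simulate the process and compare sample statistics against (11.20)-(11.22). A minimal sketch in Python, assuming NumPy is available; the values $c = 0.5$, $\beta = 0.8$ and $\sigma = 1$ are illustrative choices, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(42)
c, beta, sigma = 0.5, 0.8, 1.0      # illustrative values, not from the text
n = 200_000

e = rng.normal(0.0, sigma, n + 1)   # i.i.d.(0, sigma^2) errors
y = c + e[1:] + beta * e[:-1]       # y_t = c + e_t + beta * e_{t-1}, as in (11.20)

print(y.mean())                     # ~= c, since E(y_t) = c
print(y.var())                      # ~= (1 + beta**2) * sigma**2, as in (11.21)
print(np.cov(y[1:], y[:-1])[0, 1])  # ~= beta * sigma**2 at lag 1, as in (11.22)
print(np.cov(y[2:], y[:-2])[0, 1])  # ~= 0 at lag 2
```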

Higher-order moving average models have quite simple properties and, unlike AR($p$), they always have a stationary representation. In the moving average model of order $q$, MA($q$), given by

$$y_t = c + \varepsilon_t + \beta_1 \varepsilon_{t-1} + \ldots + \beta_q \varepsilon_{t-q},$$

a straightforward calculation gives $E(y_t) = c$,

$$V(y_t) = (1 + \beta_1^2 + \beta_2^2 + \ldots + \beta_q^2)\sigma^2$$

$$\operatorname{cov}(y_t, y_{t-s}) = (\beta_s + \beta_1\beta_{s+1} + \beta_2\beta_{s+2} + \ldots + \beta_{q-s}\beta_q)\sigma^2, \text{ if } s \le q \text{ and } 0 \text{ otherwise}.$$
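These moments depend only on the coefficients, so they can be computed directly. A sketch of the autocovariance formula above; the helper name `ma_autocov` is my own invention:

```python
import numpy as np

def ma_autocov(betas, sigma2=1.0):
    """Autocovariances gamma_0 .. gamma_q of an MA(q) process.

    betas = [beta_1, ..., beta_q]; beta_0 = 1 is implicit.
    gamma_s = sigma2 * (beta_s + beta_1*beta_{s+1} + ... + beta_{q-s}*beta_q),
    and gamma_s = 0 for s > q.
    """
    b = np.r_[1.0, betas]           # prepend beta_0 = 1
    q = len(betas)
    return [sigma2 * float(np.dot(b[: q + 1 - s], b[s:])) for s in range(q + 1)]

# MA(2) check: gamma_0 = (1 + b1^2 + b2^2)*s2, gamma_1 = b1*(1 + b2)*s2, gamma_2 = b2*s2
print(ma_autocov([0.4, 0.3]))       # [1.25, 0.52, 0.3]
```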

We have shown that AR models are stable only under certain conditions on the roots of their characteristic polynomial, but that they can always be inverted and represented as an infinite moving average. The opposite applies to MA models: they are always stable, but they are invertible to an AR representation only when the roots of the associated polynomial lie outside the unit circle.

The invertibility conditions for an MA process are similar to the stationarity conditions for an AR process. For example, it is possible to write the MA(1) model using the lag operator as $y_t = c + (1 + \beta L)\varepsilon_t$, or equivalently $(1 + \beta L)^{-1}(y_t - c) = \varepsilon_t$. So the MA(1) is invertible into an equivalent representation as a stable AR($\infty$) model,

$$y_t = c/(1 + \beta) + \beta y_{t-1} - \beta^2 y_{t-2} + \beta^3 y_{t-3} - \ldots + \varepsilon_t,$$

only if $|\beta| < 1$. For the general MA($q$) process to be invertible to an infinite-order AR model, the roots of the polynomial $1 + \beta_1 x + \beta_2 x^2 + \ldots + \beta_q x^q$ must lie outside the unit circle.
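This root condition is easy to check numerically. The sketch below (helper name mine) finds the roots of $1 + \beta_1 x + \ldots + \beta_q x^q$ and tests whether they all lie outside the unit circle:

```python
import numpy as np

def is_invertible(betas):
    """True if all roots of 1 + beta_1*x + ... + beta_q*x^q lie outside the unit circle."""
    # np.roots expects coefficients ordered from the highest power down to the constant.
    coeffs = list(reversed([1.0] + list(betas)))
    roots = np.roots(coeffs)
    return bool(np.all(np.abs(roots) > 1.0))

print(is_invertible([0.8]))    # MA(1), |beta| < 1 -> True  (root at -1/0.8 = -1.25)
print(is_invertible([1.25]))   # MA(1), |beta| > 1 -> False (root at -0.8)
```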

11.2.3 ARMA Models

The most general model for a stationary process is an autoregressive moving average model with p autoregressive terms and q moving average terms. This is the ARMA(p, q) model given by



$$y_t = c + \alpha_1 y_{t-1} + \alpha_2 y_{t-2} + \ldots + \alpha_p y_{t-p} + \varepsilon_t + \beta_1 \varepsilon_{t-1} + \ldots + \beta_q \varepsilon_{t-q}, \tag{11.23}$$

where $\varepsilon_t \sim$ i.i.d.$(0, \sigma^2)$. This is always invertible into an MA($\infty$), but it is a stationary representation only if the roots of $1 - \alpha_1 x - \alpha_2 x^2 - \ldots - \alpha_p x^p$ lie outside the unit circle. It is invertible into an AR($\infty$) model if the roots of $1 + \beta_1 x + \beta_2 x^2 + \ldots + \beta_q x^q$ lie outside the unit circle.⁹

⁹ Note that if these two polynomials have a common root the ARMA model will be overparameterized, and this will cause problems for model identification (Harvey, 1993).
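Both stationarity and invertibility are therefore root checks on the two lag polynomials in (11.23). A minimal sketch, with illustrative coefficient values of my own choosing:

```python
import numpy as np

def roots_outside_unit_circle(coeffs):
    """coeffs = [c_1, ..., c_k] for the polynomial 1 + c_1*x + ... + c_k*x^k."""
    roots = np.roots(list(reversed([1.0] + list(coeffs))))
    return bool(np.all(np.abs(roots) > 1.0))

alphas = [0.6, 0.2]    # illustrative AR coefficients alpha_1, alpha_2
betas = [0.5]          # illustrative MA coefficient beta_1

# Stationarity: roots of 1 - alpha_1*x - ... - alpha_p*x^p outside the unit circle.
print(roots_outside_unit_circle([-a for a in alphas]))   # True -> stationary
# Invertibility: roots of 1 + beta_1*x + ... + beta_q*x^q outside the unit circle.
print(roots_outside_unit_circle(betas))                  # True -> invertible
```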

11.3 Model Identification

The first objective of stationary time series analysis is to identify the appropriate lags p and q for representing the data by an ARMA model. One obvious method is to compare the empirical correlogram of the data with the known autocorrelation functions for MA models and low-order AR models that are described in §11.3.1. If sample sizes are large, as they often are in financial market data modelling, the errors in the correlogram will be small. A standard test for the significance of pth-order autocorrelation in a sample is described in §11.3.2.

Although a simple visual inspection of the correlogram may sometimes lead to the conclusion that the series exhibits autocorrelation patterns that may be modelled by a simple AR or MA model, it is not always easy to identify the appropriate model from the correlogram. For example, the AR(2) autocorrelation function looks like a damped sine wave, but so does that of an ARMA(1, 1) model, and even higher-order AR processes have autocorrelation functions that are quite difficult to identify. The last part of this section explains how to identify a time series model by testing down its specification from a high-order ARMA process.
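One way to operationalize this "testing down" idea, sketched below under the assumption that the statsmodels package is available, is to fit ARMA(p, q) models over a small grid of orders and compare an information criterion; the grid bound of 3 and the simulated stand-in series are arbitrary choices:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA  # assumes statsmodels is installed

# A simulated ARMA(1,1) series stands in for real data here.
rng = np.random.default_rng(0)
e = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.7 * y[t - 1] + e[t] + 0.4 * e[t - 1]

# Fit every ARMA(p, q) up to a chosen maximum order and rank by AIC.
aic = {}
for p in range(4):
    for q in range(4):
        aic[(p, q)] = ARIMA(y, order=(p, 0, q)).fit().aic

best = min(aic, key=aic.get)
print(best, round(aic[best], 1))   # a low-order model should be competitive
```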

11.3.1 Correlograms

The sth-order autocorrelation coefficient for a stationary time series $\{y_t\}$ is

$$\rho_s = \operatorname{cov}(y_t, y_{t-s})/V(y_t). \tag{11.24}$$

When attempting to identify the appropriate model for a stationary series it is convenient to represent the autocorrelations at different lags $s = 0, 1, 2, \ldots$ in a chart, which is called the correlogram. This is an estimate of the autocorrelation function based on empirical data. Obviously $\rho_0 = 1$ for any stationary series, and for a simple white noise process the autocorrelation function is 1 at lag zero and zero elsewhere.
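Estimating (11.24) amounts to replacing the population moments with sample moments. A minimal sketch (function name mine):

```python
import numpy as np

def correlogram(y, max_lag=20):
    """Sample autocorrelations rho_0 .. rho_max_lag, estimating (11.24)."""
    y = np.asarray(y, dtype=float)
    d = y - y.mean()
    gamma0 = np.dot(d, d) / len(y)                  # sample variance
    return np.array([np.dot(d[s:], d[: len(y) - s]) / len(y) / gamma0
                     for s in range(max_lag + 1)])

rng = np.random.default_rng(1)
print(correlogram(rng.normal(size=1000), max_lag=5))  # ~ [1, 0, 0, ...] for white noise
```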






Figure 11.5: Autocorrelations in MA($q$) and AR(1) models. [Chart showing correlograms for MA(1) ($\beta = 0.8$), MA(5), AR(1) ($\alpha = 0.9$) and AR(1) ($\alpha = -0.6$).]

Figure 11.5 shows the correlograms of some simple AR and MA models. The autocorrelation functions of MA processes have a very simple shape, being non-zero only at lags less than or equal to the order of the MA representation. To see this for the MA(1) model, using (11.21) and (11.22) it is clear that the autocorrelations of an MA(1) model take the form

$$\rho_s = \beta/(1 + \beta^2) \text{ for } s = 1 \text{ and } 0 \text{ otherwise}, \tag{11.25}$$

so they cut off after lag 1. An MA(2) process has autocorrelation function

$$\rho_1 = \frac{\beta_1(1 + \beta_2)}{1 + \beta_1^2 + \beta_2^2}, \qquad \rho_2 = \frac{\beta_2}{1 + \beta_1^2 + \beta_2^2}, \qquad \rho_s = 0 \text{ for } s > 2.$$

More generally, the autocorrelation function of an MA($q$) process is zero at all lags greater than $q$.
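A short sketch makes this cut-off property visible: compute the theoretical autocovariances from the coefficients and normalize by $\gamma_0$ (function name mine; the MA(1) and MA(2) coefficients are illustrative):

```python
import numpy as np

def ma_acf(betas, max_lag=10):
    """Theoretical autocorrelations of an MA(q): zero beyond lag q."""
    b = np.r_[1.0, betas]           # beta_0 = 1
    q = len(betas)
    gamma = [float(np.dot(b[: q + 1 - s], b[s:])) if s <= q else 0.0
             for s in range(max_lag + 1)]
    return np.array(gamma) / gamma[0]

print(ma_acf([0.8], max_lag=3))        # MA(1): [1, 0.8/1.64, 0, 0], matching (11.25)
print(ma_acf([0.4, 0.3], max_lag=4))   # MA(2): cuts off after lag 2
```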

It follows from (11.14) and (11.15) that the sth-order autocorrelation coefficient in the AR(1) process is $\alpha^s$. So the autocorrelations of an AR(1) model decline geometrically as the lag increases, and the signs will oscillate if there is negative autocorrelation ($\alpha < 0$). The autocorrelation function of an AR(p) process with $p > 1$ is more complex. Dividing (11.19) by (11.18) gives the Yule-Walker equations for the AR(p) autocorrelations:

$$\rho_s = \alpha_1 \rho_{s-1} + \alpha_2 \rho_{s-2} + \ldots + \alpha_p \rho_{s-p} \quad \text{for } s = 1, 2, 3, \ldots \tag{11.26}$$

The solution of these is in terms of the sth powers of the eigenvalues of the characteristic equation $x^p - \alpha_1 x^{p-1} - \alpha_2 x^{p-2} - \ldots - \alpha_p = 0$ (Hamilton, 1994).
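The recursion (11.26) can be iterated directly once initial values are available. For AR(1) it collapses to $\rho_s = \alpha^s$; for AR(2), solving the first Yule-Walker equation gives $\rho_1 = \alpha_1/(1 - \alpha_2)$, after which higher lags follow from the recursion. A sketch with illustrative coefficients of my own choosing:

```python
import numpy as np

def ar2_acf(a1, a2, max_lag=10):
    """Autocorrelations of a stationary AR(2) via the Yule-Walker recursion (11.26).

    rho_0 = 1, and rho_1 = a1/(1 - a2) comes from solving the first
    Yule-Walker equation; higher lags follow the recursion directly.
    """
    rho = [1.0, a1 / (1.0 - a2)]
    for s in range(2, max_lag + 1):
        rho.append(a1 * rho[s - 1] + a2 * rho[s - 2])
    return np.array(rho)

print(ar2_acf(0.5, -0.6, max_lag=8))   # damped sine wave, as described in Section 11.3
print([0.9 ** s for s in range(5)])    # AR(1) with alpha = 0.9: rho_s = alpha^s
```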


