To see that taking deviations from trend of a difference-stationary process does not make it stationary, suppose the data are generated by (11.8). Repeated substitution for $\ln P_{t-1}, \ln P_{t-2}, \ldots$ gives an alternative form of (11.8):

$$\ln P_t = c_0 + ct + \sum_{i=1}^{t} \varepsilon_i,$$

where $c_0 = \ln P_0$. The term $\sum_{i=1}^{t} \varepsilon_i$ is clearly non-stationary because it has a variance that increases with $t$.

Thus if a trend line is fitted to random walk data, or to more general data that are generated by an integrated process, the deviations from the trend will not be stationary. The appropriate way to obtain a stationary series from random walk (or integrated) data is to take first differences.
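This point can be checked with a short simulation. The sketch below (not from the text; the drift and volatility values are arbitrary) generates a random walk with drift for the log price, detrends it once by fitting an OLS time trend and once by first differencing, and compares the variance of each detrended series across sub-samples. The variance of the deviations from trend typically wanders widely from one block to the next, while that of the first differences is roughly constant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a difference-stationary log price, as in (11.8):
# ln P_t = c + ln P_{t-1} + e_t, with arbitrary c = 0.001 and sigma = 0.01.
n = 2000
log_p = np.cumsum(0.001 + rng.normal(0.0, 0.01, n))

# Detrending method 1: fit a linear time trend by OLS and take deviations.
t = np.arange(n)
X = np.column_stack([np.ones(n), t])
beta, *_ = np.linalg.lstsq(X, log_p, rcond=None)
deviations = log_p - X @ beta

# Detrending method 2: take first differences.
diffs = np.diff(log_p)

# Variance of each detrended series over four consecutive sub-samples:
# it typically wanders for the deviations (non-stationary) but is stable
# for the first differences (stationary).
for name, series in [("deviations from trend", deviations),
                     ("first differences   ", diffs)]:
    print(name, [round(float(np.var(b)), 5) for b in np.array_split(series, 4)])
```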

If the data were generated by a trend-stationary process (11.9) then the correct stationarity transform is to fit a trend and take deviations. In fact, if first differences were taken instead, the resulting series would have severe negative autocorrelation. This is easy to see, since if the true process were (11.9) then

$$\ln P_t - \ln P_{t-1} = (c + \mu t + \varepsilon_t) - (c + \mu(t-1) + \varepsilon_{t-1}) = \mu + \varepsilon_t - \varepsilon_{t-1},$$

so first differences follow an MA(1) model with first-order autocorrelation coefficient -0.5 (§11.2.1).
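A quick simulation (again not from the text, with arbitrary parameter values) confirms this: first-differencing a trend-stationary series produces a sample lag-one autocorrelation close to $-0.5$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Trend-stationary log price, as in (11.9): ln P_t = c + mu*t + e_t,
# with arbitrary c = 1.0, mu = 0.001, sigma = 0.02.
n = 5000
t = np.arange(n)
log_p = 1.0 + 0.001 * t + rng.normal(0.0, 0.02, n)

# First differences are mu + e_t - e_{t-1}: an MA(1) whose theoretical
# lag-one autocorrelation is -0.5.
d = np.diff(log_p)
d = d - d.mean()
rho1 = np.sum(d[1:] * d[:-1]) / np.sum(d ** 2)
print(round(float(rho1), 3))   # close to -0.5 in a long sample
```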

Figure 11.3 compares the two methods of detrending data: taking deviations from a time trend and first differencing. The year 1995 for the HSBC stock price shown in Figure 11.1a was chosen as a year with a significant trend; indeed, fitting a time trend by ordinary least squares to the price data shown in Figure 11.3a gives the model

$$\text{HSBC}_t = \underset{(-58)}{-3978} + \underset{(59.6)}{0.1169}\,t,$$

with $t$-statistics shown in brackets below the coefficients.

Figure 11.3b then compares the deviations from this fitted trend with the first differences of the price data. It is clear, just by looking at the data, that the deviations from trend are not stationary but that the first differences are, and we shall apply a simple test to confirm this in §11.1.5.

One cannot apply the usual diagnostics, such as correlation or volatility analysis, information ratios and the like, to deviations-from-trend data. They would mean nothing, because the data are not stationary. In the model above the $t$-statistics on the coefficients (shown in brackets) are enormous and the $R^2$ from the regression is 0.93. But whenever the dependent variable in a regression is non-stationary the standard diagnostics do not apply. When data have trends, deterministic or stochastic, the $R^2$ will always be close to 1 and the $t$-statistics are usually very biased.
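To see how misleading these diagnostics can be, the sketch below (a simulation, not the HSBC data; the drift and volatility values are arbitrary) regresses a simulated random walk with drift on a deterministic time trend. The conventional OLS output typically reports an $R^2$ close to 1 and a very large $t$-ratio on the trend coefficient, even though the residuals, i.e. the deviations from trend, are non-stationary and the standard inference is invalid.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random walk with drift: a stochastic, not deterministic, trend.
n = 250
y = np.cumsum(0.1 + rng.normal(0.0, 0.5, n))

# Regress the level on a constant and a deterministic time trend.
t = np.arange(n)
X = np.column_stack([np.ones(n), t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Conventional (and here invalid) OLS diagnostics: both are typically
# very large even though the residuals are non-stationary.
sigma2 = resid @ resid / (n - 2)
se_trend = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
r2 = 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)
print(f"R^2 = {r2:.2f}, t-ratio on the trend = {beta[1] / se_trend:.1f}")
```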




Figure 11.3 (a) HSBC price and fitted trend; (b) two types of detrending: deviations from trend and first differences (daily data, 1995).

In summary, it is very important to apply the right sort of stationarity transform to the data, if they are not already stationary. Most financial markets generate price, rate or yield data that are non-stationary because of a stochastic rather than a deterministic trend. It is hardly ever appropriate to detrend the data by fitting a trend line and taking deviations. Instead the data should be detrended by taking first differences, usually of the log prices or rates, because then the transformed stationary data will correspond to market returns.
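In code terms the recommended transform is a one-liner: the first difference of the log prices, which is the series of (continuously compounded) returns. A minimal sketch, assuming prices is an array of daily closing prices (the values below are illustrative only):

```python
import numpy as np

prices = np.array([100.0, 101.5, 100.8, 102.3, 103.0])   # illustrative values only
log_returns = np.diff(np.log(prices))   # first differences of log prices = returns
print(log_returns)
```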

11.1.5 Unit Root Tests

Statistical tests of the null hypothesis that a time series is non-stationary against the alternative that it is stationary are called unit root tests. The name derives from the fact that an ARMA process is non-stationary if its characteristic polynomial has a root that does not lie outside the unit circle (§11.2.1).

For example, an AR(1) model (11.2) is non-stationary if $\alpha = 1$. The characteristic polynomial of the AR(1) process is $1 - \alpha x$, so it has root $1/\alpha$, which lies on, rather than outside, the unit circle when $\alpha = 1$. When $\alpha = 1$ the AR(1) becomes the random walk model

$$y_t = c + y_{t-1} + \varepsilon_t,$$

where $\varepsilon_t \sim \text{i.i.d.}(0, \sigma^2)$. To test whether $\alpha = 1$ it is not sufficient to estimate $\alpha$ and then use a simple $t$-test (§A.2.2), since standard $t$-ratios are severely biased in the case of a unit root. However, using the first difference operator, an AR(1) model can be rewritten as

$$\Delta y_t = c + (\alpha - 1) y_{t-1} + \varepsilon_t. \qquad (11.10)$$

This representation of the AR(1) model gives a clue as to how the null hypothesis $\alpha = 1$ may be approached: one simply performs a regression of $\Delta y_t$ on a constant and $y_{t-1}$ and then tests whether the coefficient of $y_{t-1}$ is significantly different from zero. Note that the test should be one-sided (§A.2.1), because the alternative hypothesis that the process is stationary is $\alpha < 1$, that is, that the lag coefficient in (11.10) is less than zero.

This type of test is called a Dickey-Fuller (DF) test. Dickey and Fuller (1979) showed that standard $t$-ratios based on (11.10) are biased and that the appropriate critical values have to be increased in absolute value by an amount that depends on the sample size. For example, for a sample size of 250 the 5% critical value of the DF distribution is $-2.88$ and the 1% critical value is $-3.46$.
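A minimal sketch of the DF regression (11.10), using plain OLS in numpy; the function name and the simulated example are illustrative, and the resulting statistic must be compared with Dickey-Fuller critical values such as those quoted above ($-2.88$ at 5%, $-3.46$ at 1% for a sample of around 250), not with normal critical values.

```python
import numpy as np

def dickey_fuller_stat(y):
    """DF statistic: the t-ratio on y_{t-1} in the regression
    dy_t = c + (alpha - 1) y_{t-1} + e_t, i.e. equation (11.10)."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)                                      # dependent variable
    X = np.column_stack([np.ones(len(dy)), y[:-1]])      # constant and lagged level
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

# A simulated random walk will typically not reject the unit root null,
# while its first differences reject it decisively (one-sided test: reject
# when the statistic falls below the DF critical value).
rng = np.random.default_rng(3)
rw = np.cumsum(rng.normal(size=250))
print(round(dickey_fuller_stat(rw), 2), round(dickey_fuller_stat(np.diff(rw)), 2))
```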

Let us apply this DF test for a unit root to the data shown in Figure 11.3b, to confirm that the deviations from trend are non-stationary and the first differences are stationary. First, let $y_t$ be the first differences, shown by the solid line. Take first differences of $y_t$ and regress them on a constant and one lag, $y_{t-1}$. The result is the model

$$\Delta y_t = \underset{(1.71)}{0.122} - \underset{(-15.12)}{0.942}\, y_{t-1}.$$

The DF statistic is the $t$-ratio on the lag coefficient, here $-15.12$, which far exceeds the 1% critical value of $-3.46$ in absolute value. So the null hypothesis that the data are non-stationary can be rejected in favour of the alternative hypothesis that they are stationary. However, when one applies the same test to the deviations from trend (the dotted line) the DF regression is

$$\Delta y_t = \underset{(-0.47)}{-0.03} - \underset{(-2.11)}{0.066}\, y_{t-1},$$

and the DF statistic of $-2.11$ is too small in absolute value to reject the null hypothesis. However, this DF test is rather simple: more appropriate tests for trends are given in §11.1.6.
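For comparison, the same simple DF test is available in statsmodels via adfuller; passing maxlag=0 and autolag=None switches off the augmentation terms, so the statistic should match a hand-rolled DF regression like the one sketched above. The simulated series here is only an illustration, not the Figure 11.3 data.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(4)
rw = np.cumsum(rng.normal(size=250))       # illustrative random walk

for name, series in [("levels", rw), ("first differences", np.diff(rw))]:
    res = adfuller(series, maxlag=0, autolag=None)   # simple DF: no augmentation
    print(name, "DF statistic:", round(res[0], 2),
          "5% critical value:", round(res[4]["5%"], 2))
```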

The standard DF test is now applied to test whether log prices are non-stationary: the DF regression is just an OLS regression of returns on the lag of log prices. For example, applying DF tests to some equity indices using daily log index prices from 16 October 1997 to 11 September 1998 gives the results in



