
wider the confidence interval, the more uncertainty there is in the forecast.2 Standard errors and confidence intervals for some standard forecasting models are described in §5.2.

Having quantified the degree of uncertainty in a forecast, one should make an adjustment to the mark-to-market value of an option portfolio when some options have to be marked to model. The scale of this adjustment will of course depend on the size of the standard error of the volatility forecast, and in §5.3.1 it is shown that the model price of out-of-the-money options should normally be increased to account for uncertainty in volatility. Section 5.3.2 shows how uncertainty in volatility is carried through to uncertainty in the value of a dynamically delta hedged portfolio. It answers the question: how much does it matter if the implied volatility that is used for hedging is not an accurate representation of the volatility of the underlying process?

5.1 Evaluating the Accuracy of Point Forecasts

How can it be that so many different results are obtained when attempting to forecast volatility and correlation using the same basic data? Unlike prices, volatility and correlation are unobservable. They are parameters of the data generation processes that govern returns. Volatility is a measure of the dispersion of a return distribution. It does not affect the shape of the distribution, but it does govern how much of the weight in the distribution lies around the centre and how much lies in the extreme values of returns. Small volatilities give more weight around the centre than large volatilities, so it may be that some volatility models give better forecasts of the central values, while other volatility models give better forecasts of the extreme values of returns.

In financial markets the volatility of return distributions can change considerably over time, but there is only one point against which to measure the success of a fixed horizon forecast: the observed return over that horizon. The results of a forecast evaluation will therefore depend on the data period chosen for the assessment. Furthermore, the assessment of forecasting accuracy will depend very much on the method of evaluation employed (Diebold and Mariano, 1995). Although we may come across statements such as 'we employ fractionally integrated EWMA volatilities because they are more accurate', it is unlikely that a given forecasting model would be more accurate according to all possible statistical and operational evaluation criteria. A forecasting model may perform well according to some evaluation criteria but not so well according to others. In short, no definitive answer can ever be given to the question 'which method is more accurate?'.

2Classical statistics gives the expected value of the estimator (point estimate) and the width of the distribution of the estimator (confidence interval) given some true value of the underlying parameter. It is a good approximation for the distribution of the true underlying parameter only when the statistical information (the sample likelihood) is overwhelming compared to one's prior beliefs. This is not necessarily so for volatility forecasts, especially for the long term (§8.3.3).




Figure 5.1 Historic and realized volatility of the German mark-US dollar exchange rate (series HIST30 and REAL30, May 1988 to May 1995).


Much research has been published on the accuracy of different volatility forecasts for financial markets: see, for example, Andersen and Bollerslev (1998), Alexander and Leigh (1997), Brailsford and Faff (1996), Cumby et al. (1993), Dimson and Marsh (1990), Figlewski (1997), Frennberg and Hansson (1996) and West and Cho (1995). Given the remarks just made about the difficulties of this task it should come as no surprise that the results are inconclusive. However, there is one finding that seems to be common to much of this research, and that is that historic volatility is just about the worst predictor of a constant volatility process. Considering Figure 5.1, this is really not surprising.3 A realization of a constant volatility process is just a lag of historic volatility,4 and trying to predict the lag of a time series by its current value will not usually give good results!
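To make this point concrete, the sketch below (not taken from the text; the simulated returns, 30-day window and 250-day annualization factor are assumptions for illustration only) computes an equally weighted historic volatility and the corresponding realized volatility over the next 30 days, and checks that the realized series is simply the historic series lagged by the length of the window, as footnote 4 below describes.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
returns = pd.Series(rng.normal(0.0, 0.007, 2000))  # hypothetical daily returns
n = 30                                             # lookback = forecast horizon, in days

# Historic volatility at time t: equally weighted average of the past n
# squared returns (as in footnote 4), annualized with an assumed 250-day factor.
hist = np.sqrt((returns ** 2).rolling(n).mean() * 250)

# Realized volatility at time t: the same calculation applied to the next n
# returns, i.e. the quantity that a forecast made at time t is trying to predict.
real = pd.Series([
    np.sqrt((returns.iloc[t + 1:t + 1 + n] ** 2).mean() * 250)
    if t + n < len(returns) else np.nan
    for t in range(len(returns))
])

# The realized series is just the historic series shifted back by n days, so
# forecasting realized volatility with historic volatility amounts to
# predicting a series by its own n-period lag.
idx = np.arange(n, len(returns) - n)
assert np.allclose(hist.values[idx], real.values[idx - n])
```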

Some operational and statistical criteria for evaluating the success of a volatility and/or correlation forecast are described below. Whatever criterion is used to validate the model, it should be emphasized that, however well a model fits in-sample (i.e. within the data period used to estimate the model parameters), the real test of its forecasting power is in out-of-sample, and usually post-sample, predictive tests. As explained in §A.5.2, a certain amount of the historic data should be withheld from the period used to estimate the model, so that the forecasts may be evaluated by comparing them to the out-of-sample data.

3Let T be the forecast horizon and t = 0 the point at which the forecast is made. Suppose an exceptional return occurs at time T. The realized volatility of the constant volatility process will already reflect this exceptional return at time t = 0; it jumps up T periods before the event. However, the historic volatility only reflects this exceptional return at time T; it jumps up at the same time as the realized volatility jumps down.

4This is the case if the historic method uses an equally weighted average of past squared returns over a lookback period of the same length as the forecast horizon. More commonly, historic volatilities are averaged over a period much longer than the forecast horizon - for example, 5-year averages are used to forecast 1-year average volatilities.




5.1.1 Statistical Criteria

Suppose a volatility forecasting model produces a set of post-sample forward volatility predictions, denoted σ_{t+1}, . . ., σ_{t+T}. Assume, just to make the exposition easier, that these forecasts are of 1-day volatilities, so the forecasts are of the 1-day volatility tomorrow, and the 1-day forward volatility on the next day, and so on until T days ahead. We might equally well have assumed the forecasts were of 1-month volatility over the next month, 1 month ahead, 2 months ahead, and so on until the 1-month forward volatility in T months time. Or we might be concerned with intra-day frequencies, such as volatility over the next hour. The unit of time does not matter for the description of the tests. All that matters is that these forecasts be compared with observations on market returns of the same frequency.5

A process volatility is never observed; even ex post we can only ever know an estimate, the realization of the process volatility that actually occurred. The only observation is on the market return. A 1-day volatility forecast is the standard deviation of the 1-day return, so a 1-day forecast should be compared with the relevant 1-day return. One common statistical measure of accuracy for a volatility forecast is the likelihood of the return, given the volatility forecast. That is, the value of the probability density at that point, as explained in §A.6.1. Figure 5.2 shows that the observed return r has a higher likelihood under f(x) than under g(x). That is, r is more likely under the density that is generated by the volatility forecast that is the higher of the two. One can conclude that the higher volatility forecast was more accurate on the day that the return r was observed.
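As a numerical illustration of this comparison (the observed return and the two volatility forecasts below are hypothetical, and returns are assumed to be zero-mean normal conditional on the forecast, as in Figure 5.2):

```python
# A minimal sketch of the likelihood comparison in Figure 5.2: evaluate the
# normal density implied by each 1-day volatility forecast at the observed return.
from scipy.stats import norm

r = 0.021            # observed 1-day return (hypothetical)
sigma_f = 0.015      # higher 1-day volatility forecast, density f(x)
sigma_g = 0.008      # lower 1-day volatility forecast, density g(x)

lik_f = norm.pdf(r, loc=0.0, scale=sigma_f)
lik_g = norm.pdf(r, loc=0.0, scale=sigma_g)

# For a return this far out in the tails, f assigns the higher likelihood, so
# the higher volatility forecast was the more accurate one on that day.
print(lik_f, lik_g, lik_f > lik_g)
```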

Suppose that we want to compare the accuracy of two different volatility forecasting models, A and B.6 Suppose model A generates a sequence of volatility forecasts, {σ_{t+1}, . . ., σ_{t+T}}_A, and model B generates a sequence of volatility forecasts, {σ_{t+1}, . . ., σ_{t+T}}_B. For model A, compare each forecast σ_{t+j} with the observed return on that day, r_{t+j}, by recording the likelihood of the return as depicted in Figure 5.2. The out-of-sample likelihood of the whole sequence of forecasts is the product of all the individual likelihoods, and we can denote this L_A. Similarly, we can calculate the likelihood of the sample given the forecasts made with model B, L_B. If over several such post-sample predictive tests, model A consistently gives higher likelihoods than model B, we can say that model A performs better than model B.
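A minimal sketch of this comparison follows, assuming zero-mean normal densities and hypothetical post-sample forecast sequences; in practice the product of likelihoods is computed as a sum of log likelihoods, which gives the same ranking of models while avoiding numerical underflow.

```python
import numpy as np
from scipy.stats import norm


def out_of_sample_loglik(vol_forecasts, returns):
    """Sum of log normal likelihoods of each observed return, evaluated under
    the zero-mean normal density implied by that day's volatility forecast."""
    return norm.logpdf(returns, loc=0.0, scale=vol_forecasts).sum()


# Hypothetical post-sample data: T observed daily returns and the forecasts
# that models A and B produced for those days.
rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.01, 250)
forecasts_A = np.full(250, 0.010)   # close to the volatility that generated the data
forecasts_B = np.full(250, 0.020)   # systematically too high

L_A = out_of_sample_loglik(forecasts_A, returns)
L_B = out_of_sample_loglik(forecasts_B, returns)
print(L_A, L_B)   # L_A > L_B: model A's forecasts explain the post-sample returns better
```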

5If implied volatility is being forecast, then the market implied volatility is the observed quantity that can be used to assess the accuracy of forecasts.

6These could be two EWMA models, but with different smoothing constants; or a 30-day and a 60-day historic model; or an EWMA and a GARCH; or two different types of GARCH; or a historic and an EWMA; etc.



