
Figure 9.4 Value-at-risk from a simulated P&L density. (If the shaded area is $\alpha$ then the cut-off point is $-\mathrm{VaR}$.)

Table 9.3: Problems of using a long historic data period for historical VaR

Problem: Data on all relevant assets and risk factors may not be available.
Possible solution: Simulate data using a method such as principal component analysis (§6.4.2).

Problem: Full valuation using binomial trees or time-consuming numerical methods on thousands of data points may preclude the use of this method for real-time VaR calculations.
Possible solution: Analytic approximations may be necessary, but this introduces an additional source of error in the VaR measure.

Problem: Very long historic data periods may contain a number of extreme market events from far in the past that are not necessarily relevant to current normal circumstances.
Possible solution: Filter out the extreme events from the historic data to obtain the VaR measure under normal market circumstances that is used to calculate the MRR (but see the warning below).

Problem: The underlying assets have been trending during the historic period and this does not necessarily reflect their future performance.16
Possible solution: Mirror each upward/downward move in the data set with a downward/upward move of the same magnitude, thus doubling the size of the data set and removing any bias from trends (a sketch of this follows below).
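The mirroring remedy in the last row of the table is straightforward to implement. Here is a minimal Python sketch (the function name mirror_moves is ours, not from the text):

import numpy as np

def mirror_moves(returns: np.ndarray) -> np.ndarray:
    """Append the reflection of every move, doubling the sample size.

    Each upward (downward) move is mirrored by a downward (upward) move
    of the same magnitude, so the combined sample has zero mean and any
    bias from a trend in the historic data is removed.
    """
    return np.concatenate([returns, -returns])

# Example: a series of daily returns with an upward drift
rng = np.random.default_rng(0)
r = rng.normal(loc=0.001, scale=0.01, size=1000)
print(r.mean(), mirror_moves(r).mean())   # the mirrored mean is exactly 0.0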

The historical VaR spreadsheet on the CD illustrates how to implement the model.

… computed and applied to the historic series on the underlying assets and risk factors (§9.4.3).

The empirical h-day P&L density is obtained by building a histogram of the h-day differences $\Delta P_t = P_{t+h} - P_t$ for all t. Then the historical $\mathrm{VaR}_{\alpha,h}$ is the lower $100\alpha$th percentile of this distribution, as shown in Figure 9.4. It will be sensitive to the historic period chosen. For example, if only 300 observations are used in this empirical density, the lower 1% tail will consist of only the three largest losses. The VaR will be the smallest of these three largest losses, and the

16 However, note that the trend over 10 days will be much smaller than the variations, so any bias introduced by trending data will be very small.



conditional VaR will be the average of these three losses. Clearly neither will be very robust to changes in the data period.
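The construction just described takes only a few lines of code. The following sketch uses the text's convention for the tail (the function name historical_var is ours; the historical VaR spreadsheet on the CD may differ in detail):

import numpy as np

def historical_var(prices: np.ndarray, h: int = 10, alpha: float = 0.01):
    """Historical VaR and conditional VaR from the empirical h-day P&L density."""
    pnl = prices[h:] - prices[:-h]               # h-day differences P_{t+h} - P_t
    n_tail = max(int(np.floor(alpha * len(pnl))), 1)
    tail = np.sort(pnl)[:n_tail]                 # the n_tail largest losses
    var = -tail.max()                            # VaR: the smallest of these losses
    cvar = -tail.mean()                          # conditional VaR: their average
    return var, cvar

With 300 observations and alpha = 0.01 the tail holds just three losses, so both measures swing noticeably when the data period changes.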

From the point of view of robustness it is obviously desirable to use a very long historic data period. But this also has a number of disadvantages, which are summarized, along with possible solutions, in Table 9.3. Also, since the portfolio composition has been determined by current circumstances, how meaningful is it to evaluate the portfolio P&L using market data from far in the past? Long ago the availability, liquidity, risk and return characteristics of the various products in the portfolio would have been quite different; the current portfolio would never have been held, so what is the point in valuing it under those circumstances?

A word of caution on the use of exceptional events that have occurred in historic data for normal circumstances VaR and for stress circumstances VaR. If such events are filtered out, as suggested above, one has to be very careful not to throw out the relevant part of the data. For VaR models, the relevant part is the exceptional losses. If the exceptional event occurred far in the past and the view is taken that similar circumstances are unlikely to pertain during the next 10 days, then there should be no objection to removing them from the historic data set that is used to compute the normal circumstances VaR that forms the basis of the MRR. These events can always be substituted into the data set for the historical scenarios used to calculate stress VaR measures, for setting trading limits and so forth, possibly using the statistical bootstrap so that they occur at random times and with random frequencies in a simulation of historic data. However, if the exceptional events occurred more recently, then it is not acceptable to filter them out of the data for MRR computations, setting trading limits or any other VaR application.

As an example of recent exceptional losses that should be used in normal VaR computations, consider the daily data from 2 January 1996 to 2 October 2000 on 20 US stocks that were analysed in Table 4.1. The data showed seven extremely large negative returns among 20 stocks over about 1200 days. Over a VaR period of 10 days one should conclude that the chance of an exceptional event per stock is about (7 × 10)/(20 × 1200), or 0.29%. Therefore if all 20 stocks are taken together, and such events arrive roughly independently across stocks, the probability of an exceptional event in the portfolio over the next 10 days will be more than 5%. One could conclude that in a portfolio of 20 US stocks an exceptional return is quite normal.
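The portfolio figure is just the complement of no exceptional event occurring in any of the 20 stocks; a quick check of the arithmetic:

# Assuming exceptional events arrive (roughly) independently across stocks.
p_stock = (7 * 10) / (20 * 1200)           # ~0.29% per stock over 10 days
p_portfolio = 1 - (1 - p_stock) ** 20      # ~5.7% for the 20-stock portfolio
print(f"{p_stock:.2%} per stock, {p_portfolio:.2%} for the portfolio")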

9.4.2 Monte Carlo Simulation

Instead of using actual historic data to build an empirical P&L distribution, it may be more convenient to simulate the movements in underlying assets and risk factors from now until some future point in time (the risk horizon of the model). Taking their current values as starting points, thousands of possible values of the underlying assets and risk factors over the next h days are generated using Monte Carlo methods.


The Monte Carlo VaR spreadsheet is limited to 3000 simulations.



Figure 9.5 Sampling the hypercube and simulating independent N(0, 1) observations. (Panel titles: Unit Cube; N(0, 1) Distribution.)

This very large set of scenarios is then used to obtain thousands of possible values for the portfolio in h days' time, and a histogram of the differences between these and the current portfolio value is obtained. As with the historic simulation method, the VaR measure is simply the lower percentile of this distribution.

It is necessary to generate these scenarios in a realistic manner. One should not include, for example, a scenario where the 2-year swap rate increases by 20 basis points at the same time as the 3-year swap rate decreases by 50 basis points. Not only does the volatility of an asset determine its possible future values in h days' time; one also needs to take account of the correlations between different assets in the portfolio. For this reason one usually employs an h-day covariance matrix for all the underlying assets and risk factors in the portfolio.
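A common way to obtain such a matrix, though this scaling rule is our assumption here rather than a prescription from the text, is to estimate a daily covariance matrix and scale it to the h-day horizon under the i.i.d. (square-root-of-time) rule:

import numpy as np

def h_day_covariance(daily_returns: np.ndarray, h: int = 10) -> np.ndarray:
    """Scale a 1-day covariance matrix of k risk factors to an h-day horizon."""
    v_daily = np.cov(daily_returns, rowvar=False)   # k x k matrix from a T x k array
    return h * v_daily                              # i.i.d. returns: V scales with h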

Monte Carlo VaR calculation can be summarized in three stages, which are now explained using the example of a portfolio that has k correlated risk factors. We denote the h-day returns to these risk factors by $R_1, \ldots, R_k$ and their covariance matrix by V.

1. Take a random sample on k independent standard normal variates. First a random point in the k-dimensional unit cube is chosen, as this corresponds to a set of k independent random numbers, each between 0 and 1. The case k = 3 is illustrated in Figure 9.5. It shows how each random number $x_i$ is obtained from a point in the cube and then used with the probability integral transform to obtain a random sample on the standard normal distribution. Denote this $k \times 1$ vector of independent random samples from N(0, 1) by z.17

2. Use the covariance matrix to transform this sample into correlated h-day returns. Obtain the Cholesky decomposition of V, as explained in §7.1.2, to convert the random sample z to a vector of normal returns r that reflects the appropriate covariance structure (a sketch of both stages follows below).

17 It is possible to use other forms of distribution, analytic or empirical, but the normal distribution is the usual one.
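A minimal sketch of stages 1 and 2 in Python, using NumPy and SciPy with an illustrative 3 × 3 covariance matrix (the spreadsheet implementation will differ):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
k, n_sims = 3, 10_000

# Stage 1: sample the k-dimensional unit hypercube, then map each
# coordinate through the inverse normal CDF (the probability integral
# transform) to obtain independent N(0, 1) variates z.
u = rng.uniform(size=(n_sims, k))
z = norm.ppf(u)

# Stage 2: impose the covariance structure via the Cholesky factor A of V
# (V = A A'), so each row r = A z is a vector of correlated h-day returns.
V = 1e-4 * np.array([[4.0, 1.2, 0.8],
                     [1.2, 9.0, 2.1],
                     [0.8, 2.1, 1.0]])    # illustrative, positive definite
A = np.linalg.cholesky(V)
r = z @ A.T                               # n_sims simulated return vectors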


