where the w*ij are the denormalized factor weights (the normalized weights scaled by the standard deviations of the original variables) and the error term in (7.14) picks up the approximation error from using only the first m of the principal components.

One of the attractions of using principal component factor models is that they have a particularly simple risk structure. Since principal components are orthogonal, their unconditional covariance matrix is just the diagonal matrix of their variances. Since the principal components are also orthogonal to the error, taking variances of (7.14) gives the covariance matrix of Y = (y1, . . ., yk) as:

V = ADA′ + Vε, (7.15)

where A = (w*ij) is the matrix of denormalized factor weights, D = diag(V(P1), . . ., V(Pm)) is the diagonal matrix of variances of the principal components and Vε is the covariance matrix of the errors. Ignoring Vε in (7.15) gives the approximation

V ≈ ADA′ (7.16)

with an accuracy that is controlled by choosing more or fewer components to represent the system.
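A minimal sketch of this construction is given below, in Python with numpy on simulated data (the variable names and numbers are illustrative assumptions, not taken from the text). It builds the denormalized factor weights from the eigenvectors of the correlation matrix, keeps the first m principal components and compares ADA′ with the full sample covariance matrix:

    import numpy as np

    # Simulated system of k correlated series driven by one common trend (illustrative data only)
    rng = np.random.default_rng(0)
    k, T, m = 15, 1000, 3
    common = rng.standard_normal((T, 1))
    Y = common + 0.3 * rng.standard_normal((T, k))

    sigma = Y.std(axis=0, ddof=1)                 # unconditional standard deviations
    X = (Y - Y.mean(axis=0)) / sigma              # normalized data used for the PCA

    # PCA via the eigen-decomposition of the correlation matrix
    eigval, W = np.linalg.eigh(np.corrcoef(X, rowvar=False))
    order = np.argsort(eigval)[::-1]              # order components by variance explained
    W = W[:, order]

    P = X @ W                                     # principal components
    A = sigma[:, None] * W[:, :m]                 # denormalized factor weights w*ij = sigma_i * w_ij
    D = np.diag(P[:, :m].var(axis=0, ddof=1))     # diagonal matrix of component variances

    V_approx = A @ D @ A.T                        # the approximation (7.16)
    V_full = np.cov(Y, rowvar=False)
    print(np.abs(V_approx - V_full).max())        # error shrinks as m is increased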

Note that V will always be positive semi-definite, and one might indeed be content with this. However, V may not be strictly positive definite unless m = k.21 The approximation (7.16) will normally produce a strictly positive definite covariance matrix when the representation (7.13) is made with enough principal components to give a reasonable degree of accuracy. Nevertheless, when covariance matrices are based on (7.16) with m < k, they should be run through an eigenvalue check to ensure strict positive definiteness.
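One simple version of the eigenvalue check mentioned above, continuing the hypothetical example: the matrix from (7.16) is strictly positive definite only if its smallest eigenvalue is strictly positive.

    # Strict positive definiteness check on the covariance matrix built from (7.16)
    smallest = np.linalg.eigvalsh(V_approx).min()
    if smallest <= 0:
        # with m < k the matrix can be only positive semi-definite
        print("V is not strictly positive definite; smallest eigenvalue =", smallest)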

The first advantage of using this type of orthogonal transformation to generate risk factor covariance matrices is now clear. There is a very high degree of computational efficiency in calculating only m variances instead of the k(k+ l)/2 variances and covariances of the original system, and typically m will be much less than k. For example, in a single yield curve with, say, 15 maturities, only the variances of the first two or three principal components need to be computed, instead of the 120 variances and covariances of the yields for 15 different maturities.

21 Although D is positive definite because it is a diagonal matrix with positive elements, there is nothing to guarantee that ADA′ will be positive definite when m < k. To see this, write x′ADA′x = y′Dy, where A′x = y. Since y can be zero for some non-zero x, x′ADA′x will not be strictly positive for all non-zero x. It may be zero, and in that case ADA′ will only be positive semi-definite.

7.4.2 Orthogonal EWMA

Exponentially weighted moving averages of the squares and cross products of returns are a standard method for generating covariance matrices. But a limitation of this type of direct application of EWMAs is that the covariance matrix is only guaranteed to be positive semi-definite if the same smoothing constant is used for all the data. That is, the reaction of volatility to market events and the persistence in volatility must be assumed to be the same in all the assets or risk factors that are represented in the covariance matrix.

A major advantage of the orthogonal factor method described here is that it allows EWMA methods to be used without this unrealistic constraint. Each principal component EWMA variance in D can be calculated with a different smoothing constant, and the matrix V given by ADA′ will still be positive semi-definite.

The net effect is that the degree of smoothing in the variance of any particular asset or risk factor depends on the factor weights in the principal components representation. Since these factor weights are determined by the correlations between the variables, the degree of smoothing on any variable is governed by its correlation with the other variables in the system. Put another way, in orthogonal EWMA the market reaction and volatility persistence of a given asset will not be the same as for the other assets in the system; instead they will be related to its correlation with the other assets. Even if the EWMA variances of the principal components all have the same smoothing constant, the transformation of these variances using the factor weights will induce different decay rates for the variances and covariances of the variables in the original system.22
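A sketch of the calculation, continuing the hypothetical numpy example above (A, P and m carry over; the smoothing constants below are illustrative choices, not values from the text): each principal component is given its own EWMA variance, and the time-varying covariance matrix of the original variables is recovered by transforming the diagonal matrix of these variances with the factor weights.

    def ewma_variance(series, lam):
        """EWMA variance recursion: var_t = lam * var_{t-1} + (1 - lam) * x_{t-1}**2."""
        var = np.empty_like(series)
        var[0] = series.var(ddof=1)               # initialize with the sample variance
        for t in range(1, len(series)):
            var[t] = lam * var[t - 1] + (1 - lam) * series[t - 1] ** 2
        return var

    # A different smoothing constant may be chosen for each principal component
    lams = [0.97, 0.95, 0.92][:m]                 # illustrative values only
    D_t = np.stack([ewma_variance(P[:, j], lams[j]) for j in range(m)], axis=1)

    # Time-varying covariance matrix of the original variables at the last date:
    # V_t = A diag(D_t) A', positive semi-definite by construction
    V_t = A @ np.diag(D_t[-1]) @ A.T
    vols_t = np.sqrt(np.diag(V_t))                # orthogonal EWMA volatilities

Because the smoothing constants differ across components, the decay rates induced in the individual variances and covariances of V_t differ across variables, as described above.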


Figure 7.8 uses the daily data on the same three French stocks that were used in the example of §6.4.2. It compares the volatilities and correlations obtained using the orthogonal EWMA method with those obtained by applying EWMAs directly to the squared returns.

Comparative plots such as these are a crucial part of the orthogonal model calibration. If these volatilities and correlations are not similar, it will be because (a) the data period used for the PCA is too long, or (b) there are variables included in the system that are distorting the volatilities and correlations of other variables computed using the orthogonal method. Both of these problems may be encountered if there is insufficient correlation in the system for the method to be properly applied. If one or more of the variables has a low degree of correlation with the other variables over the data period, the factor weights in the PCA will lack robustness over time.
22 Choosing identical smoothing constants for all principal components is in fact neither necessary for positive definiteness nor desirable for optimal forecasting. The optimal smoothing constants may be lower for the higher, less important principal components, whereas the volatility of the first component may be the most persistent of the principal component volatilities in a highly correlated system, because the first component picks up the common trend.

23 The smoothing constant λ has been set arbitrarily to 0.95 for all EWMAs in this section.



Figure 7.8 Comparison of direct and orthogonal EWMA on CAC stocks.

The model could be improved by using a shorter data period, and/or by omitting the less correlated variables from the system.

Figure 7.8 shows that the orthogonal EWMA method replicates the direct EWMA method well, even though the PCA on these three stocks is not very informative because their correlation is not that high. One might expect that the orthogonal EWMA method would be even closer to direct EWMA in systems that are highly correlated, such as yield curves and other term structures.
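The mechanics behind such a comparison can be sketched with the same hypothetical helpers as above, again on the simulated data rather than the CAC stocks (and note that the component smoothing constants chosen earlier differ from the single λ = 0.95 used for the direct EWMAs here, so the two series are not expected to coincide exactly):

    def ewma_covariance(x, y, lam):
        """Direct EWMA covariance recursion on the cross products of two return series."""
        cov = np.empty(len(x))
        cov[0] = np.cov(x, y, ddof=1)[0, 1]
        for t in range(1, len(x)):
            cov[t] = lam * cov[t - 1] + (1 - lam) * x[t - 1] * y[t - 1]
        return cov

    lam = 0.95                                    # the value used for all direct EWMAs here
    v1, v2 = ewma_variance(Y[:, 0], lam), ewma_variance(Y[:, 1], lam)
    direct_corr = ewma_covariance(Y[:, 0], Y[:, 1], lam) / np.sqrt(v1 * v2)

    # Orthogonal EWMA correlation of the same pair, from the elements of A diag(D_t) A'
    cov_12 = (A[0] * A[1] * D_t).sum(axis=1)
    orth_corr = cov_12 / np.sqrt((A[0]**2 * D_t).sum(axis=1) * (A[1]**2 * D_t).sum(axis=1))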

Having explained the method with a simple three-stock example, let us now see its real strength by applying it to a larger and highly correlated system. Figure 7.9


