




arbitrary and depends on the model used. In the context of an efficient market with no arrival of information, Roll (1984) assumed a similar bias.

Now we are ready to compute the expectation of rᵢ² from Equation 5.20, using Equations 5.21 and 5.22 and the independence of rᵢ* and εᵢ,

\varrho^2 = E(r_i^2) = E(\langle r^2 \rangle) = \varrho^{*2} + \eta^2 \qquad (5.23)

The squared observed returns are thus biased by the positive amount η².
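The size of this bias can be illustrated with a small simulation. The following sketch is our own illustration in Python, not from the text; the parameter values rho_star, eta, and n are arbitrary choices. It builds observed returns from true returns plus the difference of two independent noise terms, following the model of Equation 5.20, and shows that the mean squared observed return exceeds ϱ*² by roughly η².

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the text)
rho_star = 0.001    # rho*  : std. dev. of the true return r*
eta = 0.0005        # eta   : E(eps^2) = eta^2 / 2, so the bounce biases E(r^2) by eta^2
n = 500             # returns per sample
n_samples = 20000   # Monte Carlo repetitions

bias = []
for _ in range(n_samples):
    r_star = rng.normal(0.0, rho_star, n)            # true returns
    eps = rng.normal(0.0, eta / np.sqrt(2), n + 1)   # bid-ask noise on prices
    r = r_star + eps[1:] - eps[:-1]                  # observed returns (model of Eq. 5.20)
    bias.append(np.mean(r**2) - rho_star**2)         # <r^2> - rho*^2

print("simulated bias of <r^2>:", np.mean(bias))     # approximately eta^2
print("eta^2                  :", eta**2)
```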

Empirical measures of ⟨r²⟩ are not only biased but also contain a stochastic error, which is defined as the deviation of ⟨r²⟩ from its expectation ϱ². The variance of this stochastic error can be formulated as

\sigma^2 = E\left[\left(\langle r^2\rangle - \varrho^2\right)^2\right] = E\left(\langle r^2\rangle^2 - 2\,\langle r^2\rangle\,\varrho^2 + \varrho^4\right) \qquad (5.24)

The last form of this equation has the expanded terms of the square. The first term, ⟨r²⟩², can be written explicitly by inserting Equations 5.12 and 5.20; the other two terms can be simplified by inserting Equation 5.23. We obtain

\sigma^2 = E\left[\left(\frac{1}{n}\sum_{i=1}^{n}\left(r_i^* + \varepsilon_i - \varepsilon_{i-1}\right)^2\right)^{2}\right] - \varrho^4 \qquad (5.25)

The first term is somewhat tedious to compute because of the two squares and the sum. We expand the squares to get many terms for which we have to compute the expectation values. All of those terms that contain r* or ε to an odd power have a zero expectation due to the symmetry of the normal distribution and the independence of r* and ε. The expectations of rᵢ*² and εᵢ² can be taken from Equations 5.21 and 5.22. The fourth moments of the normal distribution are

E(r_i^{*4}) = 3\left[E(r_i^{*2})\right]^2 = 3\,\varrho^{*4} \qquad (5.26)

E(\varepsilon_i^4) = E(\varepsilon_{i-1}^4) = 3\left[E(\varepsilon_i^2)\right]^2 = \tfrac{3}{4}\,\eta^4 \qquad (5.27)

as found in Kendall et al. (1987, pp. 321 and 338), for example. By inserting this and carefully evaluating all the terms, we obtain

\sigma^2 = \frac{2}{n}\left(\varrho^{*2} + \eta^2\right)^2 + \frac{n-1}{n^2}\,\eta^4 \qquad (5.28)

By inserting Equation 5.23, we can express the resulting stochastic error variance either in terms of ϱ*,

\sigma^2 = \frac{2}{n}\,\varrho^{*4} + \frac{4}{n}\,\varrho^{*2}\eta^2 + \frac{3n-1}{n^2}\,\eta^4 \qquad (5.29)



or in terms of ϱ,

\sigma^2 = \frac{2}{n}\,\varrho^4 + \frac{n-1}{n^2}\,\eta^4 \qquad (5.30)
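The stochastic error variance can likewise be checked by simulation. The sketch below is our own illustration (arbitrary parameter values, same simulated model as above): it compares the sample variance of ⟨r²⟩ over many simulated samples with the value given by Equation 5.30.

```python
import numpy as np

rng = np.random.default_rng(1)
rho_star, eta, n = 0.001, 0.0005, 250        # arbitrary illustrative values
rho2 = rho_star**2 + eta**2                  # rho^2 from Eq. 5.23

means = np.empty(50_000)
for k in range(means.size):
    r_star = rng.normal(0.0, rho_star, n)
    eps = rng.normal(0.0, eta / np.sqrt(2), n + 1)
    r = r_star + eps[1:] - eps[:-1]
    means[k] = np.mean(r**2)                 # one realization of <r^2>

var_simulated = means.var()
var_theory = 2.0 / n * rho2**2 + (n - 1) / n**2 * eta**4   # Eq. 5.30
print(var_simulated, var_theory)
```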

Now we know both the bias η² of an empirically measured ⟨r²⟩ and the variance of its stochastic error. For reporting the results and using them in the scaling law computation, two alternative approaches are possible:

1. We can subtract the bias η² from the observed ⟨r²⟩ and take the result with a stochastic error of a variance following Equation 5.30, approximating ϱ² by ⟨r²⟩. We do not recommend this here because η² is only approximately known and thus contains an unknown error. However, the idea of bias modeling and bias elimination is further developed in Chapter 7.

2. We can take the originally obtained value of ⟨r²⟩ and regard the bias η² as a separate error component in addition to the stochastic error. This is an appropriate way to go, given the uncertainty of η².

Following the second approach, we formulate a total error with variance σ²_total, containing the bias and the stochastic error. The stochastic error is independent of the bias by definition, so the total error variance is the sum of the stochastic variance and the squared bias

\sigma_{\mathrm{total}}^2 = \sigma^2 + \eta^4 = \frac{2}{n}\,\varrho^4 + \left(1 + \frac{1}{n} - \frac{1}{n^2}\right)\eta^4 \qquad (5.31)

This is the final, resulting variance of the total error of ⟨r²⟩.

For the application in the scaling law, we can use a good approximation for large values of n, n ≫ 1, which is reasonable even for moderately small values of n. By dropping higher-order terms from Equation 5.31, we obtain

"L. * -Q4 + I4 * ~ (r2)2 + I4 (5-32)

In the last form, the theoretical constant ϱ² has been replaced by its estimator ⟨r²⟩ (see Equation 5.23).
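The quality of this approximation is easy to inspect numerically. The following sketch is our own illustration; the values of ϱ², η, and n are arbitrary. It evaluates the exact total error variance of Equation 5.31 next to the approximation of Equation 5.32 for several sample sizes.

```python
import numpy as np

def total_error_variance(rho2, eta, n):
    """Exact total error variance of <r^2>, Eq. 5.31."""
    return 2.0 / n * rho2**2 + (1.0 + 1.0 / n - 1.0 / n**2) * eta**4

def total_error_variance_approx(r2_mean, eta, n):
    """Large-n approximation, Eq. 5.32, with rho^2 replaced by the estimator <r^2>."""
    return 2.0 / n * r2_mean**2 + eta**4

# Illustrative values (not from the text)
rho2, eta = 1.5e-6, 0.0005
for n in (10, 100, 1000, 10000):
    exact = total_error_variance(rho2, eta, n)
    approx = total_error_variance_approx(rho2, eta, n)   # using rho^2 itself as the "estimate"
    print(n, exact, approx)
```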

The mean squared return with error can be formulated as follows:

\langle r^2\rangle_{\text{with error}} = \langle r^2\rangle \pm \sqrt{\eta^4 + \frac{2}{n}\,\langle r^2\rangle^2} \qquad (5.33)

where the second term is the standard deviation of the error according to Equation 5.32.
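In practice, Equation 5.33 amounts to attaching an error bar to every empirically computed mean squared return. A minimal sketch, assuming observed returns are available as an array and that an approximate value of η is known, could look as follows (the function name and parameters are our own, purely illustrative):

```python
import numpy as np

def mean_squared_return_with_error(r, eta):
    """Return <r^2> and the standard deviation of its total error (Eq. 5.33).

    r   : array of observed returns at a fixed time interval
    eta : assumed bid-ask bounce scale (only approximately known in practice)
    """
    r = np.asarray(r, dtype=float)
    n = r.size
    r2_mean = np.mean(r**2)
    error_std = np.sqrt(eta**4 + 2.0 / n * r2_mean**2)
    return r2_mean, error_std

# Hypothetical usage with simulated data
rng = np.random.default_rng(2)
r = rng.normal(0.0, 0.001, 1000)
print(mean_squared_return_with_error(r, eta=0.0005))
```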

The scaling law is usually formulated for ⟨r²⟩^{1/2} rather than ⟨r²⟩, as in Equation 5.10. Applying the law of error propagation, we obtain

\left(\langle r^2\rangle^{1/2}\right)_{\text{with error}} = \langle r^2\rangle^{1/2} \pm \frac{\partial \langle r^2\rangle^{1/2}}{\partial \langle r^2\rangle}\,\sigma_{\mathrm{total}} = \langle r^2\rangle^{1/2} \pm \frac{1}{2}\sqrt{\frac{\eta^4}{\langle r^2\rangle} + \frac{2\,\langle r^2\rangle}{n}} \qquad (5.34)



The scaling law fitting is done in the linear form obtained for log ⟨r²⟩^{1/2} (see Equation 5.17). Again applying the law of error propagation, we obtain

\left(\log\langle r^2\rangle^{1/2}\right)_{\text{with error}} = \log\langle r^2\rangle^{1/2} \pm \frac{\partial \log\langle r^2\rangle^{1/2}}{\partial \langle r^2\rangle^{1/2}}\,\frac{1}{2}\sqrt{\frac{\eta^4}{\langle r^2\rangle} + \frac{2\,\langle r^2\rangle}{n}} = \log\langle r^2\rangle^{1/2} \pm \sqrt{\frac{\eta^4}{4\,\langle r^2\rangle^2} + \frac{1}{2n}} \qquad (5.35)

which gives rise to the following expression for the error variance of this quantity:

\mathrm{Var}\left(\log\langle r^2\rangle^{1/2}\right) = \frac{\eta^4}{4\,\langle r^2\rangle^2} + \frac{1}{2n} \qquad (5.36)
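The same error propagation can be written as a small helper that returns log ⟨r²⟩^{1/2} together with the standard deviation of its error from Equation 5.36; such error estimates would then serve as weights in the scaling law fit. This is a minimal sketch under the same assumptions as above (natural logarithm, approximate η known); the names are our own.

```python
import numpy as np

def log_rms_return_with_error(r, eta):
    """log <r^2>^(1/2) and the standard deviation of its error (Eqs. 5.35 and 5.36)."""
    r = np.asarray(r, dtype=float)
    n = r.size
    r2_mean = np.mean(r**2)
    log_rms = 0.5 * np.log(r2_mean)          # log <r^2>^(1/2), natural logarithm
    error_std = np.sqrt(eta**4 / (4.0 * r2_mean**2) + 1.0 / (2.0 * n))
    return log_rms, error_std

# Hypothetical usage: these error bars would enter a weighted fit of the scaling law
rng = np.random.default_rng(3)
r = rng.normal(0.0, 0.001, 2000)
print(log_rms_return_with_error(r, eta=0.0005))
```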

The assumption is now that the variance of the error of log |Δx̄| is approximately the same as that of log ⟨r²⟩^{1/2} in Equation 5.36 and that we only need to replace there the empirically obtained ⟨r²⟩^{1/2} by the empirically obtained |Δx̄|. This approximation is justified by the similar sizes and behaviors of both quantities. We obtain

\mathrm{Var}\left(\log \overline{|\Delta x|}\right) \approx \frac{\eta^4}{4\,\overline{|\Delta x|}^{\,4}} + \frac{1}{2n} \qquad (5.37)

This expression has interesting properties. In the case of long time intervals, |Δx̄| ≫ η, and the term 1/(2n) becomes the essential cause of errors. In the case of short time intervals, n is very large but |Δx̄| is of the same order as η, and the first term on the right-hand side of the equation plays the central role. This explains the peculiar form of the errors in Figure 5.8: very large for high-frequency points, then diminishing (almost indistinguishable because of the high number of observations), and eventually increasing again when the number of observations becomes small.
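This behavior can be reproduced with a few lines of code. The sketch below is purely illustrative: the total sample length, the noise scale η, the one-hour volatility, and the square-root scaling of volatility with the interval are all our own assumptions, not values from the text. Evaluating Equation 5.37 over a range of intervals then shows very large errors at the shortest intervals, a minimum at intermediate intervals, and growing errors again when n becomes small.

```python
import numpy as np

# Assumed setup (not from the text): fixed total sample length, sqrt(dt) volatility scaling,
# and a constant bid-ask bounce scale eta.
total_seconds = 10 * 365 * 24 * 3600        # ten years of data (assumed)
eta = 2e-4                                  # assumed noise scale
vol_1h = 5e-4                               # assumed r.m.s. return over one hour

for dt_hours in (0.1, 1, 24, 24 * 7, 24 * 30, 24 * 90):
    dt = dt_hours * 3600
    n = total_seconds / dt                  # number of observations at this interval
    rms = vol_1h * dt_hours ** 0.5          # assumed ~sqrt(dt) scaling of <r^2>^(1/2)
    var_log = eta**4 / (4.0 * rms**4) + 1.0 / (2.0 * n)   # Eq. 5.37, rms used in place of |dx|
    print(f"dt = {dt_hours:7.1f} h   n = {n:10.0f}   error std = {np.sqrt(var_log):.4f}")
```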

5.5.4 Limitations of the Scaling Laws

We have already mentioned that the empirical results indicate a scaling behavior from time intervals of a few hours to a few months. Outside this range, the behavior departs from Equation 5.10 on both sides of the spectrum. Many authors have noticed this effect, in particular Moody and Wu (1995) and Fisher et al. (1997) for short time intervals. It is important to understand the limitations of the scaling laws because realized volatility plays an increasingly essential role in measuring volatility and thus market risk. It also serves as the quantity to be predicted in volatility forecasting and in quality measurements of such forecasts, as we shall see in Chapter 8.

In the previous section, we saw that the bid-ask bounce generates an uncertainty in the middle price, and we have estimated its contribution to the error of the volatility estimation. For most risk assessment of portfolios, a good estimation of the daily volatility is required. Unfortunately, in practice, departures


