
FIGURE 9.2 The nonlinear function for computing the momentum indicator. This function is presented for different values of the parameter p.

where $\tilde m_x(\Delta\vartheta_r, \vartheta_c)$ are normalized momenta of returns, $m_{\max}$ is the maximum value the indicator can take, and the power $p$ is the accentuator of the indicator movements. In the case of price indicators, $p$ must be an odd number to keep the sign of the moving average. The shape of the nonlinear function is illustrated in Figure 9.2 for different powers $p$ and for an $m_{\max}$ of one. This functional shape illustrates how the indicator plays the role of a primitive trading system. If the momentum has a high positive or negative value, the indicator $z_x$ saturates, which corresponds to being fully exposed in a long or short position. The power $p$ both plays the role of a threshold (no threshold if $p = 1$) and influences how the model approaches its full long or short position. The $m_{\max}$ value plays the role of the quantity of capital invested and also influences the shape of the indicator function.
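To make the role of $p$ and $m_{\max}$ concrete, the short Python sketch below uses an assumed saturating form, a hyperbolic tangent applied to the normalized momentum raised to the power $p$. This is only a qualitative stand-in reproducing the shape of Figure 9.2; the exact functional form of Equation 9.11 is not reproduced here, and the parameter values are purely illustrative.

    import numpy as np

    def momentum_indicator(m_norm, p=3, m_max=1.0):
        # Illustrative saturating nonlinearity: NOT Equation 9.11 itself, but a
        # stand-in with the qualitative shape of Figure 9.2. An odd power p keeps
        # the sign of the normalized momentum; tanh saturates at +/- m_max.
        m_norm = np.asarray(m_norm, dtype=float)
        return m_max * np.tanh(m_norm ** p)

    # Small momenta are damped (threshold effect for p > 1), large ones saturate:
    for m in (-2.0, -0.5, 0.0, 0.5, 2.0):
        print(m, momentum_indicator(m, p=3))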

The definition given in Equation 9.11 can easily be extended to other types of problems. For instance, the same definition can be used for constructing indicators for the intrinsic time $\vartheta$, $z_\vartheta(\Delta t_r, t_c)$, where the parameters are now defined as functions of $t_c$ computed using Equation 9.7 on the $t$-scale. The function $\vartheta(t)$ is a monotonic, positive definite function, so that all of its momenta are positive.7 When the function is raised to an even power, only the upper right quadrant of Figure 9.2 becomes relevant. The primitive trading system analogy does not work in this case, but the emphasis on large movements can be avoided by leveling off the indicator.

7 Time never flows backward.

In the implementation of this algorithm, the indicators are continuously updated. Every new price received from the market makers causes the model to recompute all its indicators for all the horizons. It then updates the forecasts for each time horizon.

9.3.5 Continuous Coefficient Update

The forecasting model environment is such that the indicators and the corresponding coefficients are continuously updated. Each coefficient $(c_{x,j}, c_{\vartheta,j})$ is updated by re-estimating the model on the most recent past history. The length of this past history is a function of the forecasting horizon.8 The use of horizon-dependent finite samples for the optimization is motivated by the fact that there are different regimes in the market and that short-term horizons are particularly sensitive to them. Furthermore, short-term traders are not influenced by a past much older than three months. Using samples extending into the far past to optimize short forecasting horizons would make the model less adaptive to regime changes.

8 A few months for hourly forecasts, up to a few years for 3-month forecasts.

Adaptation to long-term regime or structural changes is enabled by re-evaluating the optimization as soon as enough new information becomes available. The optimization sample size is kept fixed (in $\vartheta$-time), the sample is rolled forward, and the linear regression is then reapplied to the new sample, as sketched below. This technique is similar to that used in Schinasi and Swamy (1989) and Swamy and Schinasi (1989), except that we use a fixed sample size whereas they add the new data to their sample. The model is optimized through the usual generalized least-squares method, except for two modifications.
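The rolling re-optimization can be sketched as follows. The window length n_window, the re-fit batch size n_new, and the plain least-squares fit are illustrative assumptions; the actual model measures the sample in $\vartheta$-time, makes its length horizon dependent, and uses the modified regression of Equation 9.12 introduced below.

    import numpy as np

    def rolling_refit(Z, y, n_window, n_new):
        # Z: (T, m) matrix of indicator values, y: (T,) forecast targets.
        # Every n_new observations, the coefficients are re-estimated by ordinary
        # least squares on the most recent n_window observations only, so that
        # old market regimes drop out of the optimization sample.
        coeffs = []
        for t in range(n_window, len(y) + 1, n_new):
            Zw, yw = Z[t - n_window:t], y[t - n_window:t]
            c, *_ = np.linalg.lstsq(Zw, yw, rcond=None)
            coeffs.append(c)          # coefficients used until the next re-fit
        return np.array(coeffs)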

Our forecasting models run in real time, and the continuous reoptimization can generate instabilities (rapid jumps from a positive to a negative forecast) when standard linear regression techniques are used. The instabilities originate from both the indicators and their coefficients in the linear combination (Equations 9.8 and 9.9). The indicators are moderately volatile; we avoid indicators that are too volatile by limiting the power of the exponents in the indicator construction to 3 for simple momenta and 7 for higher momenta. Moderately volatile indicators can cause instabilities only if their coefficients are large. The coefficients are less volatile than the indicators (owing to the large optimization samples), but they may take high values if the regression by which they are optimized is near-singular because of high correlation between the indicators. Within a particular sample, the high positive and negative coefficients typical of the solution of a near-singular regression matrix balance each other out. However, as soon as these coefficients are applied to changing indicator values outside this sample, the equilibrium is lost and the high coefficients may boost the forecast signal. We have already eliminated one source of near-singularity by avoiding indicators that are too similar within the same forecast.

The standard regression technique is applied under the assumption of precise regressors and a dependent variable with a Gaussian error. Our regressors (the indicators), however, originate from the same database as the dependent variable (the return); thus they are prone to database errors (missing data, badly filtered data, and so on) and to errors in the construction of the $\vartheta$ and time scales. Taking into account the regressor errors allows us to solve the problem of near-singularities in a natural way. Instead of considering the $j$th regressor $z_{j,i}$ at the $i$th observation (where we have dropped the variable index and the horizon for ease of notation), we consider the imprecise regressors $\hat z_{j,i} = z_{j,i} + \varepsilon_{j,i}$, where $\varepsilon_{j,i}$ is a random error with a variance of $\varrho^2$ times that of $z_{j,i}$. We call the small parameter $\varrho$ the typical relative error of the indicators and we assume it to be roughly the same for all indicators of the type defined in Section 9.3.4. Without going into the details of the calculation, such a change modifies the final version of the system of equations. The $k$th equation can be written as follows:

$$\sum_{j=1}^{m} c_j \left(1 + \varrho^2 \delta_{jk}\right) \sum_{i=1}^{N} w_i \, z_{j,i} \, z_{k,i} \;=\; \sum_{i=1}^{N} w_i \, Y_i \, z_{k,i} \qquad (9.12)$$

where $N$ is the number of observations used in the regression, $m$ is the number of indicators (the same as in Equation 9.8), $w_i$ is a weighting function depending on the type of moving averages used (here it is exponential), and $\delta_{jk}$ is the usual Kronecker symbol: 1 for $j = k$ and 0 for $j \neq k$. The quantity $Y_i$ is the usual response term of the regression: $x(\vartheta_i + \Delta\vartheta_f) - x(\vartheta_i)$. There is only one addition to the original regression: the diagonal elements of the system matrix are multiplied by a constant factor $1 + \varrho^2$, slightly greater than 1.
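In matrix form, Equation 9.12 is a weighted least-squares system whose diagonal elements are multiplied by $1 + \varrho^2$. The sketch below builds and solves this system with NumPy; the exponential weighting scheme (decay constant tau) and the numerical value of rho are illustrative assumptions rather than values taken from the text.

    import numpy as np

    def solve_modified_regression(Z, Y, rho=0.1, tau=100.0):
        # Z: (N, m) matrix of indicator values z_{j,i}
        # Y: (N,)  regression responses x(theta_i + dtheta_f) - x(theta_i)
        # rho: typical relative error of the indicators (small)
        # tau: decay constant of the (assumed) exponential weights w_i
        N, m = Z.shape
        # Exponential weights giving more importance to recent observations
        w = np.exp(-(N - 1 - np.arange(N)) / tau)
        A = (Z * w[:, None]).T @ Z               # sum_i w_i z_{j,i} z_{k,i}
        A[np.diag_indices(m)] *= 1.0 + rho**2    # diagonal factor of Eq. 9.12
        b = (Z * w[:, None]).T @ Y               # sum_i w_i Y_i z_{k,i}
        return np.linalg.solve(A, b)             # coefficients c_j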

The effect of increasing the diagonal values of the original matrix by the factor $\varrho^2$ is to guarantee a minimum regularity of the modified matrix even if the original one is near-singular or even singular. The parameter $\varrho^2$ can be interpreted as the parameter of this minimum regularity. This desired effect is accompanied by a slight decrease of the absolute values of the coefficients $c_j$, because the right-hand side of the equation system remains unaffected by the modification. The decreases are insignificant, the only exceptions being coefficients inflated by near-singularity in the original regression: there, the absolute values decrease substantially, which is what we want anyway.
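The regularizing effect can be checked numerically by reusing the solve_modified_regression sketch above with two almost identical, and therefore near-collinear, indicators; all numbers are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    z1 = rng.normal(size=500)
    z2 = z1 + 0.001 * rng.normal(size=500)     # nearly identical indicator
    Z = np.column_stack([z1, z2])
    Y = 0.5 * z1 + 0.1 * rng.normal(size=500)

    # rho = 0 typically yields large coefficients of opposite sign that cancel
    # within the sample; a small rho shrinks them to values of reasonable size.
    print(solve_modified_regression(Z, Y, rho=0.0))
    print(solve_modified_regression(Z, Y, rho=0.1))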

The other departure from the usual regression technique is a modification of the regression response $Y_i$, necessitated by the leptokurtic behavior of returns. The forecast signals are much less leptokurtic than the returns; hence the optimization is dominated by exceptionally large real price movements rather than by "normal" price movements. This is accentuated by the fact that it is squared returns that enter into the computation of the least-squares fit. Furthermore, the users of our forecasting models are more interested in the correct direction of the forecast than in the absolute size of a return forecast. A pure linear regression is thus inappropriate.

The minimization of the sum of squared deviations, however, has an important advantage: it can be reached by solving a system of linear equations. Theoretically though, the least sum of squares could be replaced by any utility function. Our problem is thus to find, within the framework of the regression technique, a more appropriate optimization (or utility) function. The best way to achieve this goal



