




Chapter 7: Construction

How does this relate to trading bands or price envelopes? To my mind, trading bands are constructed above and below some central point, usually an average. Envelopes are constructed without reference to a central point; for example, moving averages of the highs and lows, or curves fit around key highs and lows à la Hurst.

When it comes to trading bands, the problems are clear. The widths for percentage bands have to be changed from issue to issue in order to work; even for the same issue the bandwidth has to be changed as time passes in order to remain effective. Marc Chaikin had shown us one method of estimating the proper bandwidth; his Bomar Bands shifted a 21-day moving average up and down so that they contained 85 percent of the data over the past year. While this served his purposes well, for our purposes the price structure evolves more dynamically than the long lookback period of Bomar Bands allows for. Experiments in shortening the Bomar Band calculation period suggested that the calculations break down in short time frames. Marc Chaikin had hit the nail on the head with his decision to consult the market regarding the proper bandwidth, but what was needed was something that was more directly adaptive.

My first interest in the securities world was options. Analysis of options, whether options embedded in convertible bonds, warrants, or listed options, all turned on the same issue: volatility, specifically an estimate of future volatility. The key to winning in that game was simple to grasp but hard to use: you had to understand volatility better than the next person. Indeed, volatility seemed to be the key to many things, and so I studied volatility in all its forms: historical estimates, future estimates, statistical measurements, etc. When it came to trading bands, it was clear that in order to achieve success, the bands would have to incorporate volatility.

Once volatility was identified as the best way to set the width of trading bands, there were still a lot of choices. Volatility can be measured in many ways: as a function of the range over some period of time, as a measure of dispersion around a trend line, as the deviation from the expected; the list is literally endless.1 After an initial scan, a list of seven candidate measures was settled upon. Early in the decision process it became clear that the more adaptive the approach, the better it would work. Of all the measures examined, standard deviation (sigma, σ) stood out in this regard.

To calculate standard deviation, you first measure the average of the data set and then subtract that average from each of the points in the data set. The result is a list of the deviations from the average, some negative, some positive. The more volatile the series, the greater the dispersion of the list. The next step is to sum the list. However, the list as is will total to zero, because the pluses will offset the minuses. In order to measure the dispersion it is necessary to get rid of the negative signs. This can be done simply by canceling the minus signs. The resulting measure, mean absolute deviation, was one of the calculations that were initially considered. Squaring the members of the list also eliminates the negative numbers (a negative number multiplied by a negative number is a positive number); that is the method used in standard deviation. The last steps are easy: having squared the list of deviations, calculate the average squared deviation2 and take the square root (see Table 7.1).

While squaring the deviations has the benefit of allowing the rest of the computation to proceed, it also has a side effect: The deviations are magnified. In fact, the larger the deviation, the larger the magnification. Therein lies the key. For as prices surge or collapse and the deviations from the average grow, the squaring process inside the standard deviation calculation magnifies them, and the bands efficiently adapt to the new prices. As a result it almost seems as if the bands chase after price. Do not underestimate this quality. It is the key to the bands' power to clarify patterns and maintain useful definitions of what is high and what is low.

Table 7.1 The Population Formula for Standard Deviation

σ = √( Σ(x − μ)² / N )

where

x = a data point
μ = the average
N = the number of points
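The steps just described can be sketched in a few lines of plain Python. Both the mean absolute deviation alternative and the population standard deviation of Table 7.1 are shown; the function names are mine, for illustration only.

```python
# Illustrative sketch of the two dispersion measures discussed above.
# Plain Python; no external libraries required.

def mean_absolute_deviation(data):
    """Average of the deviations with the minus signs canceled."""
    mu = sum(data) / len(data)
    return sum(abs(x - mu) for x in data) / len(data)

def population_std_dev(data):
    """Table 7.1: square the deviations, average them, take the root."""
    mu = sum(data) / len(data)                # the average
    squared = [(x - mu) ** 2 for x in data]   # squaring removes the signs
    return (sum(squared) / len(data)) ** 0.5  # average, then square root

closes = [26.0, 27.5, 27.0, 28.5, 28.0]
print(round(population_std_dev(closes), 4))  # roughly 0.86
```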




The defaults for Bollinger Bands are a 20-day calculation, approximately the number of trading days in a month, and ±2 standard deviations. You will find that as you shorten the calculation period, you will need to reduce the number of standard deviations used to set the bandwidth, and that as you lengthen the calculation period, you will need to widen the bandwidth, as discussed below (or via the traditional method discussed in Chapter 4 of Part I).
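A minimal sketch of this default construction, assuming closing prices and the population standard deviation of Table 7.1; the function name and signature are illustrative, not taken from any charting package.

```python
# Bollinger Bands sketch: middle band = n-period simple moving average,
# upper/lower bands = middle band +/- width * population standard deviation.

def bollinger_bands(closes, period=20, width=2.0):
    """Return (middle, upper, lower) lists; None where the window is incomplete."""
    middle, upper, lower = [], [], []
    for i in range(len(closes)):
        if i + 1 < period:                    # not enough data yet
            middle.append(None)
            upper.append(None)
            lower.append(None)
            continue
        window = closes[i + 1 - period : i + 1]
        mu = sum(window) / period             # simple moving average
        sigma = (sum((x - mu) ** 2 for x in window) / period) ** 0.5
        middle.append(mu)
        upper.append(mu + width * sigma)
        lower.append(mu - width * sigma)
    return middle, upper, lower
```

Shortening `period` or changing `width` gives the adjusted constructions discussed in the rest of this chapter.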

The reason for the adjustment has to do with the standard deviation calculation itself. With a sample size of 30 or greater, ±2 standard deviations should contain about 95 percent of the data. With a sample size of less than 30, we really shouldn't be using the term standard deviation, but the calculation is sufficiently robust that it works anyway.3 In fact, the bands contain nearly the amount of data one would expect them to all the way down to a sample size of 10. But one has to allow for changes in the bandwidth parameter as the calculation period shrinks and the results of the calculation change character, in order to keep the containment constant.

The traditional approach to this was to use the data presented in Table 3.2, scaling the bandwidth between 1.5 and 2.5 standard deviations as the calculation period increased from 10 to 50. However, in preparing this book, a number of markets were tested to see whether that table still held true. It turns out that much smaller adjustments need to be made these days. Six markets were tested: IBM, the S&P 500 Index, the Nikkei 225 Index, gold bullion, the German mark/U.S. dollar cross rate, and the NASDAQ Composite. Ten years of data was used for everything except the mark, for which eight years of data was used. We calculated 10-, 20-, 30-, and 50-period Bollinger Bands. The bandwidth for all was then set to contain 89 percent of the data points, the average amount contained by the 20-day bands for all six series.4
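The containment measurement used in such a test can be sketched as follows, assuming what is counted is the fraction of closes that fall inside the bands; the exact original methodology is not given here, so treat this as illustrative.

```python
# Fraction of closes that fall inside the bands for a given period/width.
# Uses the population standard deviation, as in Table 7.1.

def containment(closes, period=20, width=2.0):
    inside = total = 0
    for i in range(period - 1, len(closes)):
        window = closes[i + 1 - period : i + 1]
        mu = sum(window) / period
        sigma = (sum((x - mu) ** 2 for x in window) / period) ** 0.5
        if mu - width * sigma <= closes[i] <= mu + width * sigma:
            inside += 1
        total += 1
    return inside / total if total else 0.0
```

One could then search for the `width` at which `containment` reaches a target such as 0.89.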

The test results between the markets were very consistent. Based on those test results, as a general rule I recommend that if you use a starting point of 2 standard deviations and a 20-period calculation, you should decrease the bandwidth to 1.9 standard deviations at 10 periods and increase it to 2.1 standard deviations at 50 periods (see Table 7.2).
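That rule of thumb can be captured in a small helper. The three anchor points come from the recommendation above; interpolating linearly between them is my assumption, not something taken from Table 7.2.

```python
# Bandwidth suggestion: 1.9 at 10 periods, 2.0 at 20, 2.1 at 50.
# Values between the quoted points are linearly interpolated (an assumption).

POINTS = [(10, 1.9), (20, 2.0), (50, 2.1)]

def recommended_width(period):
    if period <= POINTS[0][0]:
        return POINTS[0][1]
    if period >= POINTS[-1][0]:
        return POINTS[-1][1]
    for (p0, w0), (p1, w1) in zip(POINTS, POINTS[1:]):
        if p0 <= period <= p1:
            return w0 + (w1 - w0) * (period - p0) / (p1 - p0)
```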

These adjustments are dramatically smaller than those previously recommended. There are likely numerous factors at work: a larger sample size and a better testing methodology or


