




Case Study: Building an Advanced Trading System

Jeffrey Owen Katz, Ph.D., and Donna L. McCormick

In this chapter, we are going to develop a fully mechanized, rule-based trading system using a component-based approach to neural network technology that also brings the human brain into the loop.1 Essentially, we will be conducting an experiment: We want to determine whether a pattern in a given market, which can be visually detected and marked on charts by the developer, can be learned and identified by a neural network.

A series of neural networks will be developed, each of which will be responsible for identifying a specific pattern in the market's behavior. A set of rules will then be coded to provide the system with the criteria necessary for it to evaluate the information furnished by the networks' outputs in the generation of trading signals. This approach (which we call the "Katz Click-And-Train Approach," or KCATA) is intended to take the subjectivity out of the visual recognition of chart patterns, while still maintaining the human element in a fully mechanized system. The approach is also unique in that it capitalizes on the power of neural network technology by applying it in a manner consistent with its capabilities. This differs from traditional applications of the technology (what we have come to call the "neural newbie" approach), as well as from other common methods of applying neural networks to trading. All charts and tables were produced on TradeStation®, version 3.5, a product of Omega Research, Inc. TradeStation® is a registered trademark of Omega Research, Inc.
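To make the component-based structure concrete, the following is a minimal Python sketch of how several independently trained pattern networks might feed a hard-coded rule layer that issues the trading signals. The names (PatternNet, pattern labels) and the thresholds are illustrative assumptions, not the authors' actual code.

from typing import Callable, Dict, Sequence

class PatternNet:
    """Wraps one trained network that scores how strongly its particular
    pattern is present in the current window of market data (0.0 to 1.0)."""

    def __init__(self, name: str, score_fn: Callable[[Sequence[float]], float]):
        self.name = name
        self.score_fn = score_fn

    def score(self, window: Sequence[float]) -> float:
        return self.score_fn(window)

def rule_layer(scores: Dict[str, float]) -> str:
    """Hard-coded rules that evaluate the networks' outputs and issue a signal.
    The pattern names, thresholds, and combination logic here are placeholders."""
    if scores.get("double_bottom", 0.0) > 0.8:
        return "BUY"
    if scores.get("double_top", 0.0) > 0.8:
        return "SELL"
    return "FLAT"

def generate_signal(nets: Sequence[PatternNet], window: Sequence[float]) -> str:
    # Each component network scores the same window; the rules combine the scores.
    scores = {net.name: net.score(window) for net in nets}
    return rule_layer(scores)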

Traditional Approaches

The "Neural Newbie" Approach

By "neural newbie" we mean a newcomer ("newbie") to neural networks who takes a neural network, throws a lot of mildly preprocessed data at it, and hopes that the technology itself comes up with something useful. Back in the late 1980s, we were all neural newbies because the technology itself was so new and we did not know how to apply it most appropriately.



We have learned from much experience that the neural newbie approach does not work well with current-day markets, although it may have had some success in the past. What may account for the change is that, when the technology was new, few traders were using it. In those days, there were still inefficiencies in the markets relative to simple neural network models that were not being taken advantage of by traders. The neural newbie approach could, therefore, detect and generate signals from these market inefficiencies. However, once neural network tools became the rage, everyone began using them and simple models were tried first. As more and more people traded simple neural network models, the patterns that neural networks could detect were "traded away."

Certain market inefficiencies can still be found using this approach but, for the most part, they are not tradable (e.g., they produce too small a net return, if not a loss, after slippage and commissions are factored in). The failures developers experienced with their applications of neural network technology are most probably the cause of the technology's loss of popularity among traders. However, it is a mistake to blame such failure on the technology itself; blame should be placed on the ways in which the technology was applied. Other potentially profitable inefficiencies still exist in the markets, but more effort and more unique approaches to their discovery are required, as are more appropriate applications of the available technology.

Other Common Approaches

Over the years, we have experimented with many commonly known approaches that use neural technology and that are slightly more advanced than the neural newbie approach. For example, we have explored various kinds of inputs: more heavily preprocessed data, the slopes of adaptive moving averages, wave sampling, and Fourier transforms. We have also used inputs consisting of such data as the Commitments of Traders (COT) reports, fundamental data, and data bearing on intermarket relationships.

In addition, we experimented with different kinds of forecasting targets: Instead of trying to predict price change over the next several bars, we tried to predict simply whether a bottom or a top (defined in some simple, direct manner) would occur. We have tried to predict indicator values, as well as the future slope of moving averages.

None of these approaches produced a system with stunning performance; in fact, most of the time, the performance of such systems was fairly poor. We do not know why we never achieved complete success using these methods: perhaps, as happened with neural newbie models, such approaches simply became worse over time (we never bothered to investigate them from their inception onward); perhaps they never really worked at all. Apparently, these techniques are inadequate for capturing the inefficiencies that remain in today's markets.

Beyond Tradition

Because of our lack of success with traditional approaches, we backed off entirely from the idea of trying to use neural technology to predict the market in any direct way, shape, or form. Initially, we focused our efforts on the development of systems that consisted entirely of rules, including rules that attempted to describe the kinds of patterns we were subjectively recognizing. However, we soon discovered that this use of rules was not as feasible as we had expected.

One pattern that we recognized on the S&P 500 occurs when the stochastic oscillator forms a particular kind of double bottom; this seems to foreshadow a very short-term move. However, when we tried to cast this pattern into simple rules (e.g., a bottom is defined as a point where the values immediately before and after it are higher), the rules did not properly represent the concept, because the pattern is not perfectly smooth and there may be jiggles. When we generated such rules to represent what we were subjectively identifying as a pattern, they marked many wrong patterns that we would never have marked ourselves when looking at the chart. So, even though we are quite competent at disentangling a set of concepts into rules that differentiate and define features, many ambiguities had to be dealt with when trying to write rules to express subjective knowledge, even when that knowledge takes the form of a pattern that is very easy to detect visually. It was after this experimentation with using rules to detect patterns that we decided to attempt to have neural networks recognize the patterns that exist within their window of view.
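As an illustration of the problem just described, the following Python sketch applies the simple "lower than the value before and after" bottom rule to a synthetic stochastic series containing small jiggles. The data and function names are hypothetical; the rule itself is the one quoted above.

def naive_bottoms(stochastic: list[float]) -> list[int]:
    """Mark bar i as a bottom if the values immediately before and after it are higher."""
    return [
        i for i in range(1, len(stochastic) - 1)
        if stochastic[i - 1] > stochastic[i] < stochastic[i + 1]
    ]

# A series that, by eye, is a double bottom -- but with small jiggles along the way.
k_values = [60, 45, 30, 22, 23, 21, 24, 35, 33, 36, 25, 20, 21, 19, 26, 40, 55]

print(naive_bottoms(k_values))
# The rule marks five separate "bottoms" here, including the small jiggles,
# not just the two bottoms a chart reader would mark by eye.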

While experience has shown that neural networks cannot identify useful patterns if they are simply given relatively raw data and told to find anything that is predictively useful, they can be excellent pattern recognizers if handled in the right way. So, rather than directly training a neural network on raw market data to generate buy and sell signals, we decided to train nets to emulate a successful trader, that is, to identify the same patterns that a trader would identify on charts and use in trading. If we can subjectively mark the terminations of such things as head-and-shoulders patterns, double bottoms, and stochastic hooks, perhaps we can train neural networks to accurately recognize such patterns within their window of view and to perform that same function in the future (i.e., identify the patterns that we can identify by eye on charts).

Such thinking is what led to our decision to use the brain as the instrument that initially decides on the kinds of patterns that may be tradable, and then to use neural networks as a tool to objectively identify instances of those patterns when they occur. In essence, we are allowing the brain to serve as a trainer for the neural networks. Best of all, this keeps the trading system objective: anything that is subjective is transferred into the machine through the use of neural networks.

In the procedure we are proposing, a person subjectively recognizes a useful pattern, for example, Pattern X, and identifies the instances of Pattern X on a chart. While Pattern X is considered to be a single pattern, it will have variations; so, while the task set before the neural network may appear simple because there is only one pattern to learn, it is still complex because of these variations. For example, Pattern X might be a particular kind of double-bottom pattern, but in some instances of its occurrence the bottoms may be spread further apart or closer together, or one bottom might be higher than the other; yet, subjectively, it is still the same pattern. The net may therefore still have a fairly difficult job in learning Pattern X and all its variations, but that job is far simpler than having the network find any and all patterns that might be useful to the prediction objective.
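The following is a minimal sketch, assuming hand-marked bar indices, a fixed-length input window, and scikit-learn's MLPClassifier (none of which are specified in the text), of how subjectively marked instances of Pattern X might be turned into training cases for a pattern-recognition network.

import numpy as np
from sklearn.neural_network import MLPClassifier

WINDOW = 20  # number of bars the net "sees" when judging whether Pattern X just completed

def build_training_set(series: np.ndarray, marked_bars: set[int]):
    """Turn an indicator series plus hand-marked pattern terminations
    into (window, label) training pairs."""
    X, y = [], []
    for end in range(WINDOW, len(series) + 1):
        window = series[end - WINDOW:end]
        # Scale each window to the 0..1 range so the net learns shape, not absolute level.
        span = window.max() - window.min()
        X.append((window - window.min()) / span if span else window * 0.0)
        # Label 1 if the last bar in the window is one the developer marked on the chart.
        y.append(1 if (end - 1) in marked_bars else 0)
    return np.array(X), np.array(y)

# Stand-in data: an indicator series and the bar indices the developer marked on a chart.
rng = np.random.default_rng(0)
series = rng.uniform(0, 100, 500)
marked = {137, 261, 402}

X, y = build_training_set(series, marked)
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X, y)

# In use, the trained net scores the most recent window; a high score suggests that
# one of the hand-marked patterns has just completed.
score = net.predict_proba(X[-1:])[0, 1]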


