Neural Networks in Market Analysis

Leading-Edge Quantitative Approaches – Chaos Theory, Neural Networks, and Genetic Algorithms

1992 Association for Investment Management and Research Annual Conference

Presented by Louis B. Mendelsohn

I want to give a quick background context in which to understand the emergence of neural networks. The computing capabilities that are now coming into existence are what’s really making this technology – the entire area of applied technologies – realistic, particularly at the PC desktop level, because these are very computationally intensive technologies. Just in the last ten or twelve years we have gone from the emergence of personal computers to the point where they are in a position to do some serious number crunching at a very cost-effective level.

During this time period, over the last decade – my background and expertise are primarily in the futures markets, so I will speak from that vantage point – a software industry has emerged, so to speak, to support the trading activities of people in that industry. The genre of software that developed in the 1980s is essentially what I call a rule-based approach, wherein an expert defines specific buy and sell trading rules, computes a variety of technical indicators or applies them to the market data that he or she is looking at, and uses that as the basis for making trading decisions.

In the early 1980s – at the PC level – the notion of historical testing started to emerge. This is the idea of taking past price data, manipulating it in some fashion – the predominant means being what’s called historical optimization, where you more or less tweak the various variables used in the trading rules you deploy – and applying it to a specific market. The bottom line is that these approaches are essentially looking to do two things.

They’re looking to recognize recurrent market conditions. Obviously the notion is that to some degree history is going to repeat itself, and if that’s the case then looking back at history is going to give you some insight into the future and you will benefit from that. Of course, if that’s true, then you want to be in a position to predict that future behavior with some reasonable degree of accuracy in a timely fashion.

There are a number of characteristics and limitations to this entire rule-based approach. First of all, because it is dependent upon an expert’s imposition of trading rules, the relationships and patterns within the data are not really discerned from the data; instead, rules are imposed on the data. And of course, the success of that type of approach is entirely dependent on the knowledge base that the expert has and the degree to which those rules successfully model the market’s activities.

Typically, because of the number-crunching activities, particularly in the futures markets, a technical approach has been the predominant means by which analysis has been performed, with fundamental analysis taking a back seat – basically on the notion that the fundamentals are already factored into the price activity, so why pay any attention to fundamental data in the first place. One of the limitations of these rule-based systems is that they typically look at only one market in isolation.

If you are trading treasury bonds, you do historical testing, you get all of your historical data on treasury bonds, and you run all these technical studies and optimizations and so on. But basically you’re focusing on the past price history of treasury bonds in an effort to model that market for the future. There is no real means or capability for looking at the impact of other related markets on the target market that one is trading. And of course, in the context of the emergence of globalized trading over the last few years, with the advancements in satellite communications capabilities, that type of approach is certainly very limited and falls short of what I think needs to be done to make the best use of the information that is available.

Most of these modeling techniques are in fact linear-oriented. They typically restrict themselves to studies which have a lot of redundancy to them, and that causes some problems. As a result, they tend to fall apart. They don’t have a lot of stability into the future. They work for a short period of time and then start to degrade. That brings us to the 1990s. These are the limitations and characteristics of the genre of software of the 1980s. Now, with the more advanced hardware platforms that are becoming available, we come to the applied intelligence technologies of neural nets, chaos theory, genetic algorithms, expert systems, and so on.

I just want to go very briefly into a discussion of some of the characteristics of neural nets, and to make a comparison between them and the rule-based systems, so you can see how the evolution is taking place and what some of the benefits are that neural nets represent in market analysis. First of all, they don’t require any rules. It is not a case where an expert sits down, writes a number of rules, and then imposes them on the data. There are no rules written down. The neural net, through a process known as training, takes the wealth of data that you’ve provided to it and is able to discern the patterns and relationships within that data that lend themselves to predictiveness. That is a certain advantage that they have in that respect.

They are not limited to technical data inputs. Neural nets have the facility to utilize as raw inputs fundamental information, technical information, and certainly information on related markets, or intermarket data. Of course that allows the technician to use neural nets to look at the total context in which a particular market exists. So if you are trading treasury bonds, you don’t need to restrict your analysis to the past price history of treasury bonds. You might look at the CRB index; you might look at the S&P 500, foreign currencies, or whatever related markets you perceive to have some bearing on treasury bonds. In fact, the area of genetic algorithms provides a means of actually deciding what data is the right data to be using in a neural net. You can use that to make those kinds of decisions so that they are not arbitrary.
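To make that concrete, here is a minimal sketch in Python of how a genetic algorithm could be used to pick which related markets to feed a net. Everything in it is illustrative: the candidate market list, the synthetic data, and the simple least-squares fitness score (a cheap stand-in for actually training a neural net on each candidate subset) are my assumptions, not the method described in this talk.

```python
# Sketch: genetic algorithm for selecting intermarket inputs (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate inputs for a treasury-bond net.
candidates = ["t_bonds", "crb_index", "sp500", "dollar_index",
              "deutsche_mark", "yen", "gold", "crude_oil"]
n = len(candidates)

# Fake daily series standing in for real market data, plus a fake target
# (the next-day bond move) that happens to depend on two of the candidates.
data = rng.normal(size=(500, n))
target = 0.4 * data[:, 1] - 0.3 * data[:, 3] + rng.normal(scale=0.5, size=500)

def fitness(mask):
    """Score a subset of inputs by out-of-sample fit of a simple linear
    model, a cheap stand-in for training a full neural net on that subset."""
    if not mask.any():
        return -np.inf
    X_train, X_test = data[:400][:, mask], data[400:][:, mask]
    y_train, y_test = target[:400], target[400:]
    coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    return -np.mean((X_test @ coef - y_test) ** 2)   # higher is better

# Standard GA loop: selection, crossover, mutation on bit masks.
pop = rng.integers(0, 2, size=(20, n)).astype(bool)
for generation in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]      # keep the fittest half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, n)
        child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
        flip = rng.random(n) < 0.05                   # small mutation rate
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected inputs:", [m for m, keep in zip(candidates, best) if keep])
```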

Neural nets do not require any historical optimization of technical indicators or studies, so the notion of curve-fitting and that whole problem area is dispensed with. As I said, because they aren’t dependent on an expert developing trading rules, neural nets are not dependent upon that person’s knowledge base or perceptions of what makes the market tick as a basis for their effectiveness.

Essentially they have the means to model the market without a person fully understanding the market himself. The neural net will sift through the data and find those patterns. From an architecture standpoint, neural nets are essentially broken up into what is known as an input layer, hidden layers, and an output layer. The input layer is the layer that takes in the information you are providing to the neural net from the outside world, whether it’s price data, fundamental data, or whatever. The hidden layers are internal layers that are used in part for the processing that the neural net goes through during the training process. The output layer is where the outputs come out, so to speak; those are the predictions or the particular phenomena that you are looking to forecast.

There are connection weights which connect the various neurons in each of these layers, and essentially in a neural net no two neurons in the same layer talk to each other. Neurons in one layer simply talk to neurons in adjacent layers. There is a very structured architecture to the communication paths that exist within the neural net during the training process. That’s what gives it its ability to sift through the data, find patterns that exist in the data, and make an effort to discern which pieces of input information are relevant to improving the predictiveness of the neural net.

All of this takes place under the effects of a particular learning law that’s applied to the training process. There are a number of them, and that’s what governs the adjustments that are made to these connection weights so that the training process will take place in an appropriate fashion and you will be able to successfully train a neural net. This is basically what I just described: the input layer here, a hidden layer, and then an output layer. Of course, the number of neurons in each of those layers is a choice that the designer of the neural net makes: how many different pieces of raw data you bring into the neural net, how many hidden-layer neurons you deploy, and what you are looking to forecast – how many output neurons you want.
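As a rough illustration of that structure, here is a minimal Python/NumPy sketch of a three-layer net with an input layer, one hidden layer, and an output layer, where connection weights link only adjacent layers. The layer sizes and the random input are hypothetical choices, not a real market model.

```python
# Sketch: three-layer feedforward structure (input, hidden, output).
import numpy as np

n_inputs, n_hidden, n_outputs = 12, 6, 1   # the designer's choices

rng = np.random.default_rng(1)
# Connection weights link each layer only to the next one; neurons in the
# same layer are not connected to each other.
w_input_hidden = rng.normal(scale=0.1, size=(n_inputs, n_hidden))
w_hidden_output = rng.normal(scale=0.1, size=(n_hidden, n_outputs))

def forward(x):
    """One pass from the input layer, through the hidden layer, to the output."""
    hidden = np.tanh(x @ w_input_hidden)       # hidden-layer activations
    return np.tanh(hidden @ w_hidden_output)   # the prediction

x = rng.normal(size=n_inputs)    # e.g. 12 preprocessed market inputs
print(forward(x))                # a single forecast value
```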

In fact, the whole notion of picking the outputs that you are looking to predict and selecting the appropriate data inputs is a very, very critical facet of designing a neural net. It’s really more of an art than a science. You have to go through a trial-and-error process to some degree when making these kinds of decisions. But as I said, it’s not just prices. In the futures markets it would be volume, open interest, intermarket information, fundamental information, event-type information – the day of the week, the day of the month – anything that might have some predictive bearing on the performance of the neural net. Of course, you definitely want to minimize the redundancy. There are a lot of discussions about neural nets that suggest the more data you put in the merrier – that it doesn’t matter, it doesn’t hurt the net.

We have actually found that not to be the case. You don’t want to use a lot of redundant data or a lot of redundant means of processing data. That doesn’t really help the net; it just bogs it down, makes it more difficult to train, and takes a lot longer to train. With respect to the input data provided to the neural net, it is not something where you can just put in raw data. Unfortunately, it’s a lot more complicated than that. The data has to be manipulated – what we call pre-massaged – before it can really be fed into a neural net for training purposes. The selection of the data and the manipulation of the data is really the key to being able to successfully design and train a neural net in its application to financial market analysis. There is a lot of work that needs to be done in this area.
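As an illustration only, here is a small Python sketch of the kind of pre-massaging being described: turning a raw price series into bounded, roughly stationary inputs. The particular transforms shown (percent changes, a moving-average difference, scaling into the range -1 to 1) are generic assumptions, not the proprietary data transforms mentioned in this talk.

```python
# Sketch: generic "pre-massaging" of a raw price series before training.
import numpy as np

def transform(prices, window=10):
    """Turn raw prices into bounded, roughly stationary neural net inputs."""
    prices = np.asarray(prices, dtype=float)
    returns = np.diff(prices) / prices[:-1]                  # one-day percent change
    ma = np.convolve(prices, np.ones(window) / window, mode="valid")
    ma_diff = prices[window - 1:] - ma                       # distance from moving average

    def scale(x):
        # squash each series into [-1, 1] so no single input dominates training
        span = np.max(np.abs(x))
        return x / span if span > 0 else x

    return scale(returns), scale(ma_diff)

# Synthetic price path standing in for real market data.
prices = 100 + np.cumsum(np.random.default_rng(2).normal(size=250))
returns_scaled, ma_diff_scaled = transform(prices)
```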

In our own firm we have neural nets that are designed to trade primarily the treasury bond market and a number of foreign currencies, as well as the OEX market. We have developed a number of customized statistical measurements that we use to do that manipulation – or data transforms, as we call them – and as I said, this is a very sophisticated area that really requires a lot of research and trial and error on the part of the developer. There are a large number of architectures that can be used; I’m not going to go into them. One of the ones that we’ve used to a great degree is the back-propagation method. In some respects you may very well end up using more than one neural net in a particular application. You may use a neural net such as a self-organizing map to help select data, and then you may use the back-propagation neural net for the actual training. So it’s not an either-or situation.

There are a number of issues that need to be dealt with during the training process to ensure that the neural net is trained properly, because there are some technical issues that need to be worked through. Neural nets can be trained and then subsequently tested on out-of-sample data with a walk-forward test. As a result, there is oftentimes a need to go back in and redesign the architecture of the net or alter the number of hidden layers or hidden neurons. It is an iterative process; you don’t just go in, throw some data into a neural net, train it in two minutes, and then you’ve got yourself a working neural net.
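Here is a rough sketch, in Python, of the walk-forward idea: train on one window of history, test on the unseen period that follows, then slide the window forward. The least-squares fit stands in for training a neural net, and the window lengths and synthetic data are made-up placeholders.

```python
# Sketch: walk-forward (out-of-sample) testing with a stand-in model.
import numpy as np

def fit(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # stand-in for net training
    return coef

def evaluate(coef, X, y):
    return np.mean((X @ coef - y) ** 2)            # out-of-sample error

def walk_forward(X, y, train_len=250, test_len=50):
    """Train on one window, test on the unseen period after it, then slide forward."""
    scores, start = [], 0
    while start + train_len + test_len <= len(X):
        tr = slice(start, start + train_len)
        te = slice(start + train_len, start + train_len + test_len)
        scores.append(evaluate(fit(X[tr], y[tr]), X[te], y[te]))
        start += test_len                          # slide the window forward
    return scores

rng = np.random.default_rng(3)
X = rng.normal(size=(600, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=600)
print(walk_forward(X, y))   # one error figure per walk-forward step
```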

It’s a very laborious task. It can take a long period of time. It’s not so much the computational time that’s involved; it’s really the assessment of the outcome, the performance of the net, and the need to go back and tweak its architecture before you go back for further retraining. In the case of the back-propagation net, what actually happens is, as I mentioned, you have these connection weights that connect the neurons from one layer to the next.

In the training process, what happens very simply is that the neural net starts off with these connection weights randomly set. Obviously, because it’s training on past data, it knows the actual value of the output you are training it to predict. As it goes through the data on an iterative basis it begins making predictions, and then of course it is in a position to compare its predictions with what the actual output value really was. Of course it can compute an error, and based upon the learning laws that are being applied, there will be a backward flow of information through the net, back through all of the various connection weights that connect the neurons to one another. And they will be altered in value to reflect the fact that the net had made an error in its prediction.

What happens on an iterative basis, simply, is that through this training process these connection weights are constantly being changed in value as long as the neural net’s prediction errors are still evident when compared to the actual values. Ultimately, when it’s successfully trained, the overall error of the neural net will be minimized. It will have learned what impact each of these various inputs has on determining the output that you’re seeking. That’s where the predictiveness comes in, and where the use of the intermarket data comes in – in terms of allowing the net to have knowledge of that marketplace and be useful to you in predicting future values.
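For readers who want to see the mechanics, here is a bare-bones Python/NumPy sketch of back-propagation as just described: the weights start out random, predictions are compared with the known outputs, the error flows backward through the connection weights, and the weights are nudged on every pass so the overall error shrinks. The data, network sizes, and learning rate are all illustrative assumptions, not anyone's production system.

```python
# Sketch: back-propagation training of a tiny two-layer net on made-up data.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 8))                 # 8 preprocessed inputs per example
y = (X[:, :2].sum(axis=1, keepdims=True)      # the "actual" values the net
     + rng.normal(scale=0.1, size=(400, 1)))  # is being trained to predict

W1 = rng.normal(scale=0.1, size=(8, 5))       # input -> hidden weights (random start)
W2 = rng.normal(scale=0.1, size=(5, 1))       # hidden -> output weights
learning_rate = 0.05                          # one simple "learning law"

for epoch in range(2000):
    # forward pass: make predictions with the current weights
    hidden = np.tanh(X @ W1)
    pred = hidden @ W2

    # compare predictions with the actual values and compute the error
    error = pred - y
    mse = np.mean(error ** 2)

    # backward pass: propagate the error back through the connection weights
    grad_out = 2 * error / len(X)
    grad_W2 = hidden.T @ grad_out
    grad_hidden = (grad_out @ W2.T) * (1 - hidden ** 2)
    grad_W1 = X.T @ grad_hidden

    # adjust the weights so the overall error keeps shrinking
    W2 -= learning_rate * grad_W2
    W1 -= learning_rate * grad_W1

print("final training error:", mse)
```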

As I mentioned, the hardware is the key issue. There are obviously not only the 486s, and of course the 586s that will be coming on, but also workstations and even dedicated boards that are now being developed particularly for the purpose of training neural nets. So the hardware is definitely coming on line, and this area will certainly get a lot more support from that standpoint, which will make it a lot easier and more cost effective to apply neural net technology to a variety of issues outside of the investment industry, but of course within the industry as well.

Presently there are a handful of generic – what I call plain vanilla – neural net training and simulation programs on the market. NeuralWare in Pittsburgh has one, HNC out in San Diego has a product, and California Scientific has one called BrainMaker. They all basically have the same kind of capability – they each have different bells and whistles – but essentially they are geared toward generic training of neural systems without any specific regard to the financial industry or market analysis per se.

If a person wanted to deploy one of those software packages at this point, there would really be a fairly steep learning curve in terms of the time that you would need to commit to the project, and as I said, there is just a lot of trial-and-error work involved. There isn’t really a good cookbook approach to doing it. It’s really a research project, and if you wanted to develop your own neural net training software you would of course need significant programming expertise.

In our own firm a few years ago, I began moving away from rule-based approaches to financial analysis in the futures area, and I formed a small research division of my company to look into the neural net area. We have been working in that area for the last two years or so. And last year we began to market commercially to the industry pre-trained neural nets applied to these various financial markets – the currencies, T-bonds, and so on.

They were pre-trained in that we had completely designed the architecture of the nets, made the selection of what inputs to bring into the nets, the data transforms that we would make, and the training process itself, and essentially what we gave our clients is simply a canned neural net that trades one particular market and makes predictions for that market. These programs that we’ve developed definitely use intermarket analysis. They look at anywhere from eight to fifteen markets related to the target market that the net is trained on. They deploy fundamental as well as technical data. And in some cases they involve not just one neural net; they may very well involve four or five neural nets, each one predicting a separate, distinct output. And in one case we are actually using several layers of neural nets, where outputs from one set of neural nets are used as inputs into another neural net.

Where are we going from here? To sum it up, the hardware is definitely coming on board. We’re going to see increasingly sophisticated hardware capabilities, and that will certainly make the development of neural nets a much more feasible area for the next few years. I think that because neural nets can combine both technical and fundamental data, the whole artificial separation between technical analysts and fundamental analysts is going to start to diminish. Through the structures that neural nets can have, technical and fundamental information will be able to be used side by side as inputs into a trading approach. We will see neural nets with the capability to do intraday analysis, and possibly even on-the-fly retraining during the day on several markets simultaneously. Again, this is all simply hardware dependent. The knowledge is there; it’s simply a matter of having the hardware that can support it.

Ultimately, it won’t simply be an issue of neural nets. It’s going to be the whole area of applied intelligent software – the combination of neural nets, expert systems, genetic algorithms, all of these various intelligent technologies coming together and having a role in more sophisticated market analysis.

Thank you very much.