Fundamental Analysis Meets the Neural Network

By Lou Mendelsohn and Jon Stein

In the 1980s, futures traders embraced both technical analysis and computer technology.

The analysis has been largely “single-market” – analyzing one market using only that market’s own past data. Numerous computer software programs have refined this practice by “optimizing” the technical indicators used to generate trading signals.

Today, technology has advanced to the point that these traders have begun using next-generation predictive tools such as neural networks to develop trading systems. But developers so far have directed little research toward designing systems that synthesize seemingly disparate technical and fundamental data into a disciplined trading approach.

If you regard the integration of the world’s financial markets through modern technology as a fait accompli, and couple it with the emergence of these new tools, the shortcomings of the “single-market” approach become apparent. With the prospect of improved trading performance, some traders are beginning to take a serious look at using neural networks to add multiple-market – intermarket and fundamental – data to a predictive system.

Trading systems developed via a neural network differ from those created or programmed by the human thought process alone.

Neural-based systems comprise layers of interconnected cells or “neurons”. Three layers – input, middle and output – usually exist. In one system design, known as “feed-forward” with “back-propagation” learning, neurons in adjacent layers connect to one another. Mathematical weights, or “connection strengths”, are assigned to these links.

Through a process known as “training,” these weights are set and then adjusted. Neural-based systems actually learn by making generalizations about the world, then changing the weights when told a generalization is right or wrong. Historical data is used for this teaching.
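To make the mechanics concrete, here is a minimal sketch in Python of a feed-forward network trained by back-propagation. The layer sizes, sigmoid activation, learning rate and toy data are illustrative assumptions, not the design of any system discussed in this article.

```python
# Minimal feed-forward network with back-propagation (illustrative
# sketch only; layer sizes, activation and learning rate are assumed).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Input, middle ("hidden") and output layers.
n_in, n_mid, n_out = 6, 8, 1
W1 = rng.normal(scale=0.5, size=(n_in, n_mid))   # input-to-middle connection strengths
W2 = rng.normal(scale=0.5, size=(n_mid, n_out))  # middle-to-output connection strengths

def predict(x):
    """Forward pass: produce a forecast from the current weights."""
    return sigmoid(sigmoid(x @ W1) @ W2)

def train_step(x, target, lr=0.1):
    """One forward pass plus one back-propagation weight update."""
    global W1, W2
    h = sigmoid(x @ W1)                     # middle-layer activations
    y = sigmoid(h @ W2)                     # the network's forecast
    err = target - y                        # compare with the known result
    d_out = err * y * (1.0 - y)             # error gradient at the output
    d_mid = (d_out @ W2.T) * h * (1.0 - h)  # ...propagated back a layer
    W2 += lr * np.outer(h, d_out)           # adjust connection strengths
    W1 += lr * np.outer(x, d_mid)
    return float((err ** 2).sum())

# Toy historical data: inputs paired with known results.
X = rng.normal(size=(200, n_in))
T = (X.sum(axis=1, keepdims=True) > 0).astype(float)
for epoch in range(50):                     # iterative training
    mse = np.mean([train_step(x, t) for x, t in zip(X, T)])
print("final mean squared error:", mse)
```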

Art meets science

Developing a neural-based system that predicts markets is both an art and a science. Proper system design is critical to performance.

First, appropriate data inputs must be selected. These inputs are preprocessed via various statistical and analytic techniques. Later, during training, they are coupled with known results (in historical data) that the system is challenged to forecast.

Training with market data is an iterative process through which the neural network discovers hidden patterns in the data. The network records its errors in judgment by comparing each forecast with the known result; the error signal then travels back through the layers, modifying the connection weights between neurons. A system is fully trained when its overall error level is minimal.

Both accuracy and trading performance can be evaluated via a process called “walk-forward,” or “blind-simulation” testing, in which the system is trained on data from one time period and tested on another.
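A bare-bones version of that split might look like the sketch below. The train_step and predict functions are passed in (for instance, the hypothetical helpers from the earlier sketch); the record format and epoch count are assumptions.

```python
# Walk-forward ("blind simulation") testing: fit on one period, then
# evaluate, with learning switched off, on the following period.
from datetime import date

TRAIN = (date(1987, 7, 1), date(1990, 6, 30))   # fit on this window...
TEST  = (date(1990, 7, 1), date(1991, 6, 30))   # ...test blindly on this one

def walk_forward(records, train_step, predict, epochs=50):
    """records: iterable of (day, inputs, known_result) tuples,
    where predict(inputs) returns a scalar forecast."""
    records = list(records)
    train = [(x, t) for d, x, t in records if TRAIN[0] <= d <= TRAIN[1]]
    test  = [(x, t) for d, x, t in records if TEST[0] <= d <= TEST[1]]
    for _ in range(epochs):                     # iterative training
        for x, t in train:
            train_step(x, t)
    # Blind simulation: weights frozen, forecasts only.
    sq_errs = [(t - predict(x)) ** 2 for x, t in test]
    return sum(sq_errs) / len(sq_errs)          # out-of-sample error
```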

Because training is computation-intensive for nearly all applications, a 486/33 machine with an accelerator board capable of performing several million connections per second is needed for training and testing.

Once a system has been trained, developers can use an IBM-AT or 386 computer to predict future prices and generate real-time signals, simply by providing the system with updated inputs. For on-line training and real-time updating with intraday data, a more powerful platform is required.

The impact on trading performance of incorporating intermarket and fundamental data inputs into a neural-based system can be demonstrated with two systems developed for the Deutsche mark futures market at the Chicago Mercantile Exchange.

Both were trained over the three-year period from July 1, 1987, to June 30, 1990, and tested on identical price data from July 1, 1990, to June 30, 1991, for an apples-to-apples, walk-forward comparison of results.

Each neural system actually consists of three preliminary neural-based systems trained to predict the 5-day, 10-day and 20-day trend directions. These predicted outputs then feed into a final system that generates buy and sell trading signals. The left-hand table above summarizes the data inputs for both systems.
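The two-stage layout might be organized along the lines of the following sketch. The Net class stands in for a trained network like the one sketched earlier, and the input counts and signal cutoffs are illustrative assumptions, not the authors’ specifications.

```python
# Two-stage architecture: three preliminary nets forecast the 5-, 10-
# and 20-day trend directions; a final net turns those forecasts into
# trading signals. All sizes and thresholds are assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)

class Net:
    """Stand-in for a trained feed-forward network (randomly
    initialized here; in practice, trained by back-propagation)."""
    def __init__(self, n_in, n_mid=8):
        self.W1 = rng.normal(scale=0.5, size=(n_in, n_mid))
        self.W2 = rng.normal(scale=0.5, size=(n_mid, 1))
    def predict(self, x):
        s = lambda z: 1.0 / (1.0 + np.exp(-z))
        return float(s(s(np.asarray(x, dtype=float) @ self.W1) @ self.W2).item())

trend_nets = {h: Net(n_in=12) for h in (5, 10, 20)}  # one net per horizon
signal_net = Net(n_in=3)                             # consumes the three forecasts

def trade_signal(inputs):
    forecasts = [trend_nets[h].predict(inputs) for h in (5, 10, 20)]
    score = signal_net.predict(forecasts)
    # Cutoffs are illustrative, not the authors' actual rules.
    return "BUY" if score > 0.66 else "SELL" if score < 0.34 else "HOLD"

print(trade_signal(rng.normal(size=12)))
```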

System 1 employs only single-market data for the D-mark: open, high, low and closing prices, plus volume and open interest figures.

System 2 is a “multi-market” system. It uses the same technical data inputs as system 1 plus intermarket and fundamental data inputs thought to be related to the D-mark market (a sketch of aligning these mixed-frequency series follows the list). The data includes:

  • Daily interest rate differentials between cash three-month Euromark and cash three-month Eurodollar closes from the Financial Times.

  • Daily intermarket currency cross rates between D-mark and both British pound and Japanese yen futures.

  • Weekly closes for both the U.S. 10-year Treasury note yield and the German 10-year government bond yield.

  • Weekly Bundesbank reports on the German current account balance.

  • Monthly economic data on Germany’s “new” combined money supply figure known as “M3.”

  • Monthly economic data on the German wholesale price index reported by the Federal Statistics Office.
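One practical wrinkle with these inputs is that they arrive at different frequencies. A common way to handle that – sketched here with pandas and hypothetical column names, since the article does not describe the authors’ alignment method – is to put everything on a daily index and carry each slower series forward until a new reading appears.

```python
# Aligning mixed-frequency inputs on a daily index (hypothetical
# column names; placeholder zeros stand in for the actual series).
import pandas as pd

days = pd.bdate_range("1987-07-01", "1991-06-30")        # study period

# Daily: D-mark prices, rate differentials, currency cross rates.
daily = pd.DataFrame({"dm_close": 0.0, "euro_rate_diff": 0.0}, index=days)

# Weekly: note/bund yields, current account; monthly: M3, wholesale prices.
weekly = pd.DataFrame({"us_10y": 0.0, "de_10y": 0.0},
                      index=pd.date_range("1987-07-03", "1991-06-30", freq="W-FRI"))
monthly = pd.DataFrame({"de_m3": 0.0, "de_wpi": 0.0},
                       index=pd.date_range("1987-07-01", "1991-06-01", freq="MS"))

# Carry each slower series forward until its next reading appears.
features = (daily
            .join(weekly.reindex(days, method="ffill"))
            .join(monthly.reindex(days, method="ffill")))
```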

While the technical price data on futures is readily available from vendors, fundamental data is more elusive. The price data for this test came from Commodity Systems Inc. Fundamental and cash data came from Datastream International.

All data inputs for both systems were preprocessed with proprietary statistical and analytic techniques used in the neural-based systems developed by Mendelsohn. These include procedures for smoothing data as well as measurements of randomness and trendiness.

“Preprocessing” is critical for finding hidden relationships and patterns within the data. In system 2, for example, changes in Germany’s wholesale price index were fed in as measurements of inflation. Differences in inflation rates between Germany and the United States were fed in as well (basically the natural log of the ratio of the two rates).
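The inflation inputs, for instance, might be computed along these lines. This is only a sketch: the log-ratio transform comes from the parenthetical above, while the year-over-year horizon and smoothing span are assumptions (the authors’ actual procedures are proprietary).

```python
# Sketch of the preprocessing steps named in the text: year-over-year
# inflation from a monthly price index, the log-ratio inflation
# differential, and a simple smoothing pass. Parameters are assumed.
import numpy as np
import pandas as pd

def inflation_rate(price_index: pd.Series, periods: int = 12) -> pd.Series:
    """Year-over-year percentage change in a monthly price index."""
    return price_index.pct_change(periods)

def inflation_differential(de: pd.Series, us: pd.Series) -> pd.Series:
    """Natural log of the ratio of the two inflation rates
    (undefined when either rate is non-positive)."""
    return np.log(de / us)

def smooth(series: pd.Series, span: int = 10) -> pd.Series:
    """Exponential smoothing to damp noise before training."""
    return series.ewm(span=span).mean()
```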

Because both systems were trained to predict the same outputs, then were tested over the same period on identical data, differences in performance can be attributed to their data inputs and preprocessing.

The training and testing statistics for the two systems are depicted in the right-hand table above. A $100 round-turn overhead allowance was made for slippage and commissions.

A simple money management stop-loss procedure was applied to both final systems: On the first day of a new trade, a full one-point intraday stop was entered; on each subsequent day, the stop was tightened by 10 ticks. If the next day’s open was beyond the stop, the system exited the trade on the open; otherwise, the level was treated as an ordinary stop during the day’s trading. If the trade was not profitable by the fifth day, the system exited on the open of the sixth day.
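In code, those rules might look like the sketch below for a long position (shorts would mirror the arithmetic). The tick size and the reading of “one point” as 100 ticks are assumptions; the article does not define either.

```python
# Stop-loss sketch for a long D-mark position. TICK and POINT are
# assumptions; the daily-bar format is also assumed.
TICK = 0.0001
POINT = 100 * TICK                        # "a full one-point" initial stop

def run_stop(entry, bars):
    """bars: one (open_, low, close) tuple per day, bars[0] = entry day.
    Returns (exit_day, exit_price), or None if no exit was triggered."""
    stop = entry - POINT                  # day 1: full one-point stop
    for i, (open_, low, close) in enumerate(bars):
        day = i + 1
        if day > 1:
            stop += 10 * TICK             # tighten by 10 ticks each day
            if open_ <= stop:             # opened beyond the stop...
                return day, open_         # ...exit on the open
        if low <= stop:
            return day, stop              # ordinary intraday stop
        if day == 5 and close <= entry and len(bars) > 5:
            return 6, bars[5][0]          # unprofitable by day 5: exit day-6 open
    return None

print(run_stop(0.5800, [(0.5800, 0.5760, 0.5790),
                        (0.5785, 0.5700, 0.5720)]))   # -> (2, ~0.5710)
```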

System 1 – using only price, volume and open interest data – generated 24 trades with a total net profit of $13,252 in the one-year test period (after commissions and slippage). Maximum drawdown was $2,537, and the longest run of consecutive losing trades was three. The average losing trade was $555; the average winning trade, $1,659.

System 2, using intermarket analysis and economic data, had only 15 trades, producing $19,226 in net profits during this same time. System 2 had the same number of consecutive losers and maximum drawdown as system 1. It had an average losing trade of $648 and an average winning trade of $2,970.

The three preliminary systems that predicted 5-day, 10-day and 20-day trend directions for system 2 were quite accurate, perhaps due to the correlation of economic and intermarket data with trend changes. When the market moves strongly one way, system 1, with its limited and myopic view, can only predict moves in the direction of the trend. The added data in system 2, however, seemed to make it more adept at predicting price corrections and trend changes before they occurred.

The winner is…

Thus, the final network of system 2, which generates the buy and sell signals, was superior at identifying the overall trend and its turning points. By distinguishing minor price moves from the prevailing trend, it either maintained the existing position or stayed out of the market. This resulted in fewer closed-out trades, a slightly better percentage of winning trades and a significantly larger average winning trade.

These results suggest single-market analysis, even with a neural system, is too limited. In today’s global markets, the impact of intermarket and fundamental factors must be worked into your trading strategy. For technicians reluctant to take this step, neural-based systems represent an ideal way to add them.

For practical applications, more robust neural systems can be designed. Other relevant data pertaining to the traded market may prove useful, such as data on the German discount and Lombard rates.

“Genetic” algorithms that identify the best preprocessing procedures, more elaborate system architectures, alternative training algorithms and neuron sensitivity analysis all can be used to “tweak” performance.

As next-generation predictive technology is applied to financial markets, artificial intelligence such as neural networks should change the nature of technical analysis. Neural-based systems can even be combined with rule-based expert systems.

The D-mark system is just a sample of things to come. Neural nets have the potential to be the most significant advancement in market analysis since the advent of the computer.

Lou Mendelsohn is president of Market Technologies Corporation in Wesley Chapel, Fla. He develops neural trading systems for financials.