PO Box 323
Boulder, CO 80306
(303) 442-3539

Product: Application combining graphic user interface with tools needed to create, train, test and embed neural networks.

Requirements: IBM-compatible machine with 80286/386/486 central processing unit (CPU). An Intel 80287/80387 numeric coprocessor is not required but will decrease training time. Minimum of 640K RAM and DOS 3.0 or higher. Extended memory, if available, can be used for processing large input/output (I/O) data sets. EGA/VGA monitor necessary. Mouse is recommended but not required.
Price: $495 for the DynaMind Developer, which includes DynaMind version 3.0 and NeuroLink version 2.0. DynaMind 3.1 can be purchased as a standalone application for $195 without the NeuroLink library.

Neural networks represent an innovative technology that has recently generated much excitement in the financial industry. Their ability to adapt or generalize to new situations from provided examples gives them an advantage over forecasting and modeling methods that follow sequential patterns or require expert knowledge. To arrive at the best choice of inputs, preprocessing, architecture, training and testing for a specific neural network application, it is often necessary to use several development tools in combination. This is particularly true given the different capabilities and functionality offered by such tools, coupled with the technical challenge of designing neural networks capable of making highly accurate forecasts in the financial markets. DynaMind Developer is one neural network development package that we have found useful for building prototypes of neural networks.

DynaMind is a software package that can be used to create, train and implement neural networks for various applications. Two easy-to-follow install programs, located on the two diskettes included with DynaMind Developer, made installation simple. The programs asked the usual questions: which drive to install on and whether the default directory name could be used. Installation was not only simple but fast; on a 486/33 it took approximately two minutes. The diskettes also included all the files and example nets necessary for the tutorials.

The DynaMind Developer package includes four major components that assist in creating neural networks and neural network applications. The first, DynaMind 3.0, is the heart of DynaMind Developer and is used to design, train and test a variety of neural networks. The second component, NeuroLink 2.0, is a C-code library that allows software developers to link networks created in DynaMind 3.0 with their own software. The third component is a preprocessing filter that uses discrete Fourier transforms (DFT). The fourth and final component allows various neural network hardware devices to be emulated for testing purposes.

DynaMind 3.0 combines a sophisticated graphic user interface with the tools needed to create, train and test neural networks. With a mouse, DynaMind's pop-up windows and menus allow the user to quickly and easily maneuver through the various screens to select desired functions. One of DynaMind's most useful features is its ability to display the progress of the training graphically in real time. A number of options are available for displaying the training results on the screen. One option depicts a bar chart that plots the network error, which is proportional to the sum of the squared errors of the output neurons: E = (t1 - o1)^2 + (t2 - o2)^2 + ... + (tn - on)^2, where ti is the target and oi the actual output of the ith output neuron.
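The sum-of-squared-errors measure that the bar chart plots can be computed in a few lines (a minimal sketch; the function name is ours, not DynaMind's):

```python
def network_error(targets, outputs):
    """Sum of the squared differences between target and actual outputs,
    proportional to the network error plotted during training."""
    return sum((t - o) ** 2 for t, o in zip(targets, outputs))

# Example with three output neurons whose errors are 0.1, -0.2 and 0.3
print(network_error([1.0, 0.0, 1.0], [0.9, 0.2, 0.7]))  # ~0.14
```

As training proceeds, this quantity shrinking toward zero is what the bar chart makes visible at a glance.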

Another option for displaying the training results is a profile for each output: a graph showing the network output, the target output and the error for each neuron in the output layer. Both output displays can be viewed simultaneously in a split screen (see Figure 1), although this will increase training time.

While the network trains, DynaMind 3.0 provides a substantial amount of feedback, the graphic representation of which makes it easy to interpret. Another useful function provided by DynaMind is the ability to pause training, enabling us to change certain parameters and then resume training to see what effect this has on the network.

Another way that DynaMind displays network performance graphically is the Weight Viewer (Figure 2), which provides a scaled window that reveals the network weights and the magnitude of their connections. Periodically, training can be halted to assess the status of the network weights at various stages in the training process (Figure 3).

DynaMind automatically finds and uses extended memory if available, which supports feedforward networks and I/O data sets that may require more than the standard 640K. To access even more memory, we configured DynaMind to use virtual memory from our hard disk with the Allmemory command-line option. Virtual memory uses half the available hard-disk space on the current drive.

NeuroLink 2.0 is a library of C-code routines to help software developers create applications based on custom neural networks. By calling NeuroLink routines, networks generated with DynaMind can be linked to DOS-based systems, C and C++ programs. NeuroDynamX, the creator of DynaMind, indicates that any number of networks trained with different algorithms can be linked together, limited only by memory constraints. In turn, these linked networks can be embedded royalty-free into commercial software applications. Programs using the NeuroLink 2.0 library can be compiled with Borland Turbo C 2.0, Turbo C++ and Borland C++, using the large-memory model.

DynaMind can create a ready-to-run neural network filter designed to perform a discrete Fourier transform (DFT). The discrete Fourier transform is defined by the formula in Figure 4.

This transformation is useful when a preprocessing network is needed to convert a time domain sample into a frequency domain sample and when working with sound classification problems, for example. These filters can also be embedded into commercial programs using the NeuroLink 2.0 library.
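For readers without Figure 4 at hand, the discrete Fourier transform is the standard X[k] = sum over n of x[n]*exp(-2*pi*i*k*n/N). A direct, unoptimized rendering of that formula (illustrative only; it says nothing about how DynaMind's filter is implemented internally):

```python
import cmath

def dft(x):
    """Direct O(N^2) discrete Fourier transform:
    X[k] = sum over n of x[n] * exp(-2*pi*i*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A constant signal concentrates all its energy in the zero-frequency bin
spectrum = dft([1.0, 1.0, 1.0, 1.0])
```

This is the time-domain-to-frequency-domain conversion the article describes, in its simplest form.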

DynaMind is capable of emulating various neural network hardware devices, such as the 80170NX electrically trainable analog neural network (ETANN) chip and the ETANN multi-chip board (EMB) from Intel.

According to NeuroDynamX, hardware devices emulated with DynaMind can be written later to the actual chip(s) using the Intel neural network training system (iNNTS). To date, however, I have not worked with this feature.

DynaMind has three training paradigms from which to choose. The first two, Back Prop and Madaline III, are the most widely used. The network architecture for these two consists of multiple layers of neurons, the outputs of which feed into neurons in subsequent layers (Figure 5); this is known as a feedforward topology. TrueTime, the third system, is a proprietary recurrent training algorithm that can be applied to time-dependent problems such as stock price forecasting, sequence recognition and sales projections. Unlike most networks that use multiple layers, TrueTime consists of a single layer of neurons, completely interconnected with input connections from an external input array. This type of topology is known as recurrent or feedback. Previous inputs affect the system’s current output state, which in turn alleviates the need to preprocess the data.

DynaMind uses an IOBuilder program to create I/O files to use with its neural networks. The IOBuilder program accepts standard text files from a spreadsheet or word processor and parses them into segments for the neural networks to train on. The program gives the option of creating four different types of I/O files, the first three of which are used with the Back Prop and Madaline III algorithms. The fourth type of I/O file is designed for use with the TrueTime algorithm. We used a text file from Microsoft Excel as our I/O file. The IOBuilder program was easy to use and generated an error message whenever the data was formatted incorrectly, preventing the possibility of training a network with a bad data file.
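The parsing step IOBuilder performs can be approximated as follows. The whitespace-delimited row format and the function name are our assumptions for illustration, not IOBuilder's actual file layout; the error check mimics its refusal to build an I/O file from badly formatted data:

```python
def parse_io_file(text, n_inputs):
    """Split whitespace-delimited rows of numbers into (inputs, targets)
    pairs, raising an error on any row that fails to parse."""
    pairs = []
    for lineno, line in enumerate(text.strip().splitlines(), 1):
        try:
            values = [float(field) for field in line.split()]
        except ValueError:
            raise ValueError(f"badly formatted data on row {lineno}: {line!r}")
        pairs.append((values[:n_inputs], values[n_inputs:]))
    return pairs

# Two rows of two inputs and one target each
pairs = parse_io_file("1 2 3\n4 5 6", n_inputs=2)
```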

Feedforward topology, a well-known and widely used network type, consists of layers of neurons that feed into subsequent layers. Data used with these networks must be preprocessed extensively, as these network types cannot process time-varying information.
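A feedforward pass of the kind just described can be sketched in a few lines. The sigmoid activation and the (weights, bias) data layout are illustrative choices on our part, not DynaMind's internals:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def feedforward(inputs, layers):
    """Propagate inputs through successive layers of neurons.
    Each layer is a list of (weights, bias) pairs, one per neuron;
    each layer's outputs become the next layer's inputs."""
    activations = inputs
    for layer in layers:
        activations = [sigmoid(sum(w * a for w, a in zip(weights, activations)) + bias)
                       for weights, bias in layer]
    return activations
```

Note that each time step's input is processed in isolation, which is why time-varying information must be encoded into the inputs by preprocessing.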

Currently, I am testing different input formats using DynaMind’s TrueTime paradigm. Recurrent, or feedback, networks require more time to train because the outputs of the network feed back through the single layer of neurons. There is no need to preprocess the data, as time is represented by virtue of the structure of the neural network built by DynaMind.
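The recurrent topology can be caricatured as a single layer whose previous outputs feed back as additional inputs at each time step. TrueTime itself is proprietary, so this generic single-layer recurrent update (with tanh activation, our choice) is only meant to show why time is represented by the structure rather than by preprocessing:

```python
import math

def recurrent_step(external_inputs, prev_outputs, in_weights, fb_weights):
    """One time step of a single-layer recurrent network: each neuron
    sees the external inputs plus every neuron's previous output."""
    outputs = []
    for w_in, w_fb in zip(in_weights, fb_weights):
        net = (sum(w * x for w, x in zip(w_in, external_inputs))
               + sum(w * y for w, y in zip(w_fb, prev_outputs)))
        outputs.append(math.tanh(net))
    return outputs
```

Because prev_outputs carries information forward from earlier inputs, the network's current state depends on the history of the sequence, exactly the property the article attributes to the feedback topology.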

Of the various neural network packages that I have examined and worked with over the past three years, DynaMind’s graphic interface and ease of use stand out. Its ability to graph the inputs, outputs and targets in a simple, easy-to-read display provides good visual representations of the network’s progress. In addition, I was pleased with the preliminary results of the networks that we were able to train using the DynaMind Developer.

Lou Mendelsohn is president of Market Technologies Corporation, Wesley Chapel, FL, a research, development and consulting firm involved in the application of artificial intelligence to financial market analysis.

Reprinted from Technical Analysis of
Stocks & Commodities magazine. (C) 1993 Technical Analysis, Inc.,
4757 California Avenue S.W., Seattle, WA 98116-4499, (800) 832-4642.