Experiment Setup and Results

We considered 7 years of stock data for the Nasdaq-100 index and 4 years for the NIFTY index. Our target was to develop efficient forecasting models that could predict the index value of the following trading day from the opening, closing, and maximum values on any given day. The training and test patterns for both indices (scaled values) are illustrated in Figure 1. We used the same training and test data sets to evaluate the different connectionist models. More details are reported in the following sections. Experiments were carried out on a Pentium IV 2.8 GHz machine with 512 MB RAM, and the programs were implemented in C/C++. Test data were presented to the trained connectionist models, and the output from each network was compared with the actual index values in the time series.
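The pattern construction described above can be sketched as follows. This is an illustrative reconstruction, not the original C/C++ code; the min-max scaling scheme and the choice of the closing value as the one-day-ahead target are assumptions.

```python
import numpy as np

def make_patterns(open_, close, high):
    """Build one-day-ahead patterns: inputs are the opening, closing,
    and maximum (high) values on day i; the target is the index value
    on day i+1.  Min-max scaling to [0, 1] and the use of the closing
    value as the target are illustrative assumptions."""
    data = np.column_stack([open_, close, high])
    lo, hi = data.min(), data.max()
    scaled = (data - lo) / (hi - lo)   # scale all three series together
    X = scaled[:-1]                    # features from day i
    y = scaled[1:, 1]                  # next day's (scaled) closing value
    return X, y

# tiny synthetic series, just to show the shapes
o = np.array([10.0, 11.0, 12.0, 11.5])
c = np.array([10.5, 11.2, 11.8, 11.9])
h = np.array([10.8, 11.5, 12.1, 12.0])
X, y = make_patterns(o, c, h)
print(X.shape, y.shape)   # (3, 3) (3,)
```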

The prediction performance of the different connectionist paradigms was assessed by quantifying the predictions obtained on an independent data set. The root-mean-squared error (RMSE), maximum absolute percentage error (MAP), mean absolute percentage error (MAPE), and correlation coefficient (CC) were used to evaluate the trained forecasting models on the test data.

MAP is defined as follows:

$$\mathrm{MAP} = \max_{i} \left( \frac{\left| P_{\mathrm{actual},i} - P_{\mathrm{predicted},i} \right|}{P_{\mathrm{predicted},i}} \times 100 \right)$$

where $P_{\mathrm{actual},i}$ is the actual index value on day $i$ and $P_{\mathrm{predicted},i}$ is the forecast value of the index on that day. Similarly, MAPE is given as:

$$\mathrm{MAPE} = \frac{1}{N} \sum_{i=1}^{N} \left| \frac{P_{\mathrm{actual},i} - P_{\mathrm{predicted},i}}{P_{\mathrm{actual},i}} \right| \times 100$$

where $N$ represents the total number of days.
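The four performance measures above can be computed in a few lines of NumPy. The following is a minimal sketch; the function names are mine, not from the original implementation.

```python
import numpy as np

def rmse(actual, predicted):
    # root-mean-squared error
    return np.sqrt(np.mean((actual - predicted) ** 2))

def map_error(actual, predicted):
    # maximum absolute percentage error over all test days
    return np.max(np.abs(actual - predicted) / predicted * 100.0)

def mape(actual, predicted):
    # mean absolute percentage error over the N test days
    return np.mean(np.abs((actual - predicted) / actual)) * 100.0

def cc(actual, predicted):
    # Pearson correlation coefficient
    return np.corrcoef(actual, predicted)[0, 1]

actual = np.array([100.0, 200.0, 400.0])
predicted = np.array([110.0, 190.0, 405.0])
print(rmse(actual, predicted), map_error(actual, predicted),
      mape(actual, predicted), cc(actual, predicted))
```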

• FNT Algorithm

We used the instruction set S = {+2, +3, ..., +10, x0, x1, x2} for modeling the Nasdaq-100 index and the instruction set S = {+2, +3, ..., +10, x0, x1, x2, x3, x4} for modeling the NIFTY index. We used the flexible activation function of Equation 4 for the hidden neurons. Training was terminated after 80 epochs on each dataset.

A feed-forward neural network with three input nodes and a single hidden layer consisting of 10 neurons was used for modeling the Nasdaq-100 index. A feed-forward neural network with five input nodes and a single hidden layer consisting of 10 neurons was used for modeling the NIFTY index. Training was terminated after 3000 epochs on each dataset.
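The text does not show the PSO training loop for these networks. The following is a minimal sketch of a global-best PSO fitting the weights of the 3-10-1 feed-forward network; the weight layout, inertia weight, and acceleration constants are generic textbook values, not those of the original experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID = 3, 10
DIM = N_IN * N_HID + N_HID + N_HID + 1   # 51 weights and biases in total

def forward(w, X):
    """Forward pass of a 3-10-1 network with weights packed in a flat
    vector w (the packing order is an illustrative assumption)."""
    k = N_IN * N_HID
    W1 = w[:k].reshape(N_IN, N_HID)
    b1 = w[k:k + N_HID]
    W2 = w[k + N_HID:k + 2 * N_HID]
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def pso_train(X, y, n_particles=20, iters=200):
    """Global-best PSO minimizing training RMSE."""
    def err(w):
        return np.sqrt(np.mean((forward(w, X) - y) ** 2))
    pos = rng.normal(0.0, 0.5, (n_particles, DIM))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_e = np.array([err(p) for p in pos])
    g = pbest[np.argmin(pbest_e)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, DIM))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = pos + vel
        e = np.array([err(p) for p in pos])
        improved = e < pbest_e
        pbest[improved] = pos[improved]
        pbest_e[improved] = e[improved]
        g = pbest[np.argmin(pbest_e)].copy()
    return g, pbest_e.min()

# synthetic 3-feature patterns, just to exercise the loop
X = rng.random((30, 3))
y = X.mean(axis=1)
w, e = pso_train(X, y)
print(e)
```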

A WNN with three input nodes and a single hidden layer consisting of 10 neurons was used for modeling the Nasdaq-100 index. A WNN with five input nodes and a single hidden layer consisting of 10 neurons was used for modeling the NIFTY index. Training was terminated after 4000 epochs on each dataset.

An LLWNN with three input nodes and a single hidden layer consisting of five neurons was used for modeling the Nasdaq-100 index. An LLWNN with five input nodes and a single hidden layer consisting of five neurons was used for modeling the NIFTY index. Training was terminated after 4500 epochs on each dataset.
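An LLWNN differs from an ordinary WNN in that each hidden unit's output weight is a local linear model of the inputs rather than a constant. A minimal forward-pass sketch follows; the Mexican-hat mother wavelet and the parameter shapes are illustrative assumptions, not necessarily those of the original networks.

```python
import numpy as np

def mexican_hat(z):
    # Mexican-hat mother wavelet, a common WNN choice
    return (1.0 - z ** 2) * np.exp(-0.5 * z ** 2)

def llwnn_forward(X, a, b, C):
    """Local linear wavelet network forward pass.
    X: (n, d) inputs; a, b: (m, d) dilation/translation parameters
    for m hidden wavelet units; C: (m, d+1) local linear coefficients,
    so unit i contributes (C[i,0] + C[i,1:] . x) * psi_i(x)."""
    n, d = X.shape
    y = np.zeros(n)
    for i in range(len(a)):
        z = (X - b[i]) / a[i]
        psi = np.prod(mexican_hat(z), axis=1)   # product across dimensions
        local = C[i, 0] + X @ C[i, 1:]          # local linear output weight
        y += local * psi
    return y

# five hidden units, three inputs, as in the Nasdaq-100 configuration
rng = np.random.default_rng(1)
m, d = 5, 3
X = rng.random((8, d))
a = np.ones((m, d))
b = rng.random((m, d))
C = rng.normal(0.0, 0.1, (m, d + 1))
out = llwnn_forward(X, a, b, C)
print(out.shape)   # (8,)
```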

Figure 4. Test results showing the performance of the different methods for modeling the Nasdaq-100 index


Figure 5. Test results showing the performance of the different methods for modeling the NIFTY index
Table 1. Empirical comparison of RMSE results for four learning methods

                  FNT        NN-PSO     WNN-PSO    LLWNN-PSO

Training results
Nasdaq-100        0.02598    0.02573    0.02586    0.02551
NIFTY             0.01847    0.01729    0.01829    0.01691

Testing results
Nasdaq-100        0.01882    0.01864    0.01789    0.01968
NIFTY             0.01428    0.01326    0.01426    0.01564

Table 2. Statistical analysis of four learning methods (test data)

                    FNT         NN-PSO      WNN-PSO     LLWNN-PSO

Nasdaq-100  CC      0.997579    0.997704    0.997721    0.997623
            MAP     98.107      141.363     152.754     230.514
            MAPE    6.205       6.528       6.570       6.952

NIFTY       CC      0.996298    0.997079    0.996399    0.996291
            MAP     39.987      27.257      39.671      30.814
            MAPE    3.328       3.092       3.408       4.146

• Performance and Results Achieved

Table 1 summarizes the training and test results achieved for the two stock indices using the four different approaches. The statistical analysis of the four learning methods is depicted in Table 2. Figures 4 and 5 depict the test results for the one-day-ahead prediction of the Nasdaq-100 index and the NIFTY index, respectively.

