Results

Several experiments were conducted with various architectures of MLP and feed-forward networks. After a number of experiments, the number of hidden layers was fixed at one in both cases, with each hidden layer consisting of four processing units. Theoretically, it has been shown that for MLPs with a wide variety of continuous hidden-layer activation functions, a single hidden layer with an arbitrarily large number of units suffices for the "universal approximation" property (Hornik, 1993; Hornik, Stinchcombe, & White, 1989).
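For concreteness, a minimal sketch of the network topology described above is given below, using scikit-learn's MLPRegressor as a stand-in; the paper does not name its software, and the inputs, targets, activation, and training settings here are assumptions:

```python
# Sketch of a one-hidden-layer MLP with four processing units, as fixed
# in the experiments. The data below are placeholders, not the paper's data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 3))   # placeholder input features
y_train = rng.normal(size=200)        # placeholder target (e.g., one NS parameter)

net = MLPRegressor(hidden_layer_sizes=(4,),  # one hidden layer, four units
                   activation="tanh",        # assumed; any continuous activation works
                   max_iter=2000,
                   random_state=0)
net.fit(X_train, y_train)
```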

The fit of the parameters β₀, β₁, β₂, and λ during the testing phase of the MLP is depicted in Figures 2(a)-(d). Similar diagrams could be shown for the feed-forward network, though they are purposely omitted. Table 1 shows the average error in predicting β₀, β₁, β₂, and λ relative to the values fitted by the Nelson-Siegel method on the test-data set. However, what matters most is the error generated in forecasting the bond price calculated using the forecasted values of the Nelson-Siegel parameters of the yield curve. So, in some sense, the comparison of the forecasted Nelson-Siegel parameters with the modeled Nelson-Siegel parameters on the test data is only of pseudo-importance.
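As a point of reference, the sketch below shows how a bond price follows from the four Nelson-Siegel parameters. This uses one common parameterization of the curve; the paper's exact conventions, the compounding assumption, and the example cash-flow schedule are assumptions here:

```python
# Price a bond by discounting cash flows at Nelson-Siegel zero-coupon yields.
import numpy as np

def nelson_siegel_yield(tau, beta0, beta1, beta2, lam):
    """Zero-coupon yield at maturity tau under the Nelson-Siegel curve
    (lambda taken as a decay scale; conventions vary across papers)."""
    x = tau / lam
    slope = (1.0 - np.exp(-x)) / x
    return beta0 + beta1 * slope + beta2 * (slope - np.exp(-x))

def bond_price(cash_flows, times, beta0, beta1, beta2, lam):
    """Discount each cash flow at the NS yield for its maturity
    (continuous compounding assumed)."""
    times = np.asarray(times, dtype=float)
    y = nelson_siegel_yield(times, beta0, beta1, beta2, lam)
    return float(np.sum(np.asarray(cash_flows) * np.exp(-y * times)))

# Illustrative only: a 3-year bond, 5% annual coupon, face value 100.
price = bond_price([5, 5, 105], [1, 2, 3],
                   beta0=0.07, beta1=-0.02, beta2=0.01, lam=1.5)
```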

Figure 2(a). Variation between actual and neural network values of β₀ on out-of-sample data (MLP). [Plot: actual (desired) output vs. neural-network output for β₀; x-axis: Exemplar 1-251.]

Figure 2(b). Variation between actual and neural network values of β₁ on out-of-sample data (MLP). [Plot: actual (desired) output vs. neural-network output for β₁; x-axis: Exemplar.]

Figure 2(c). Variation between actual and neural network values of β₂ on out-of-sample data (MLP). [Plot: actual (desired) output vs. neural-network output for β₂; x-axis: Exemplar.]

Figure 2(d). Variation between actual and neural network values of λ on out-of-sample data (MLP). [Plot: actual (desired) output vs. neural-network output for λ; x-axis: Exemplar 1-251.]

Table 1. Average percentage error in prediction of β₀, β₁, β₂, and λ using MLP and feed-forward architectures

Parameter    Average Percentage Error (MLP)    Average Percentage Error (Feed-forward)
β₀           7.09128                           6.851593
β₁           6.00752                           5.86612
β₂           13.59411                          13.05343
λ            16.85239                          16.91081

Table 2. Mean-square error in prediction of bond price

Multilayer Perceptron    Feed-forward Network
7.377                    4.094

Table 3. Average percentage error in prediction of bond price

Multilayer Perceptron    Feed-forward Network
0.00709                  -0.00023


Tables 2 and 3 give the MSE and the average percentage error in predicting bond prices for both the MLP and the feed-forward network. The feed-forward network performs better than the MLP: the model based on a feed-forward network is better able to capture the diverse facets of the term structure than the model based on the multilayer perceptron.
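For reference, a minimal sketch of the two error measures reported in Tables 2 and 3, assuming `actual` and `predicted` are arrays of bond prices and that the percentage error is a signed mean (consistent with the negative entry in Table 3):

```python
# Error measures for predicted bond prices.
import numpy as np

def mse(actual, predicted):
    """Mean-square error, as in Table 2."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return float(np.mean((actual - predicted) ** 2))

def avg_percentage_error(actual, predicted):
    """Signed average percentage error, as in Table 3;
    a value near zero indicates little systematic bias."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return float(np.mean((actual - predicted) / actual) * 100.0)
```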

The models that combine the Nelson-Siegel method with neural networks produce significantly smaller pricing errors and appear to forecast the yield and bond price accurately. The percentage error in prediction is less than 1% for both network models, which indicates a good forecast.

From Figure 2(d) it can be observed that the fit for the parameter λ on the out-of-sample test data is not particularly good. However, the low errors in predicting bond prices from the forecasted parameters suggest that λ does not contribute much to the forecasting of the yield curve.
