MLFF_NETWORK Function (PV-WAVE Advantage)
Links and modifies a multilayered feedforward neural network.
Usage
result = MLFF_NETWORK(network)
Input Parameters
network—A structure containing a multilayered feedforward network.
Returned Value
result—A structure containing the modified multilayered feedforward network.
Input Keywords
Activation_fcn_layer_id—A scalar integer between 1 and the number of layers in the network, defining the layer number for which to set node activation functions. If one hidden layer has been created, 1 will indicate the hidden layer and 2 will indicate the output layer. If there are no hidden layers, 1 indicates the output layer. Must be used in conjunction with Activation_fcn_values.
Activation_fcn_values—An integer array of length n_perceptrons, where n_perceptrons is the number of perceptrons in the layer specified by Activation_fcn_layer_id. The ith value of the array contains the activation function for the ith perceptron in that layer. Valid activation function values are between 0 and 3 and correspond to:
*0—Linear
*1—Logistic
*2—Hyperbolic-tangent
*3—Squash
By default, output layer nodes use a Linear activation function and all hidden layer nodes use a Logistic activation function. Must be used in conjunction with Activation_fcn_layer_id (a usage sketch follows this keyword list).
Bias_layer_id—A scalar integer between 1 and the number of layers in the network, defining the layer number for which to set node bias values. If one hidden layer has been created, 1 will indicate the hidden layer and 2 will indicate the output layer. If there are no hidden layers, 1 indicates the output layer. Must be used in conjunction with Bias_values.
Bias_values—A float array of length n_perceptrons, where n_perceptrons is the number of perceptrons in the layer specified by Bias_layer_id. The ith value of the array should contain the bias value associated with the activation function for the ith perceptron in that layer. Must be used in conjunction with Bias_layer_id.
Create_hidden_layer—A scalar integer defining the number of perceptrons to add to the network as a new hidden layer. To create more than one hidden layer, MLFF_NETWORK must be called multiple times with this keyword set, once for each hidden layer desired. By default, the network has no hidden layers.
Link_all—If present and nonzero, connects all nodes in a layer to each node in the next layer, for all layers in the network. Cannot be used with either Link_layer or Link_node.
Link_layer—A two-element integer array, (from, to). Creates links from all nodes in layer from to all nodes in layer to. Layers are numbered starting at zero with the input layer, then the hidden layers in the order they are created, and finally the output layer. Cannot be used with either Link_all or Link_node.
Link_node—A two-element integer array, (from, to). Creates a link from node from to node to. Nodes are numbered starting at zero with the input nodes, then the hidden layer perceptrons, and finally the output perceptrons. Cannot be used with either Link_all or Link_layer.
Remove_link—A two-element integer array, (from, to). Removes the link between node from and node to. Nodes are numbered starting at zero with the input nodes, then the hidden layer perceptrons, and finally the output perceptrons.
Weights—This keyword has been deprecated starting with version 10.0 of PV-WAVE.
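The following sketch illustrates the activation and bias keywords described above. It assumes a hypothetical network with one hidden layer of three perceptrons and a single output perceptron; the activation codes and bias value are arbitrary.
; Hypothetical network: one hidden layer of 3 perceptrons, 1 output perceptron.
network = MLFF_NETWORK(network, Activation_fcn_layer_id=1, $
   Activation_fcn_values=[2, 2, 2])    ; Hyperbolic-tangent on the hidden layer nodes
network = MLFF_NETWORK(network, Bias_layer_id=2, $
   Bias_values=[0.1])                  ; initial bias for the single output node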
Output Keywords
N_links—Returns the total number of links in the network.
Discussion
A multilayered feedforward network contains an input layer, an output layer, and zero or more hidden layers. The input and output layers are created by the MLFF_NETWORK_INIT Function (PV-WAVE Advantage), where N_inputs specifies the number of inputs in the input layer and N_outputs specifies the number of perceptrons in the output layer. The hidden layers are created by one or more calls to MLFF_NETWORK with the Create_hidden_layer keyword specifying the number of perceptrons in the hidden layer.
The network also contains links, or connections, between nodes. Links are created by using one of the three keywords Link_all, Link_layer, or Link_node. The most useful is Link_all, which connects every node in each layer to every node in the next layer. A feedforward network is a network in which links are only allowed from one layer to a following layer.
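For example, the following is a minimal construction sketch. The sizes are arbitrary, and MLFF_NETWORK_INIT is assumed to take the number of inputs and the number of outputs as its two arguments (see its own reference entry).
network = MLFF_NETWORK_INIT(3, 1)                          ; 3 input attributes, 1 output perceptron
network = MLFF_NETWORK(network, Create_hidden_layer=4)     ; add one hidden layer of 4 perceptrons
network = MLFF_NETWORK(network, /Link_all, N_links=nl)     ; connect each layer to the next layer
PRINT, nl                                                  ; total number of links created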
Each link has a weight and gradient value. Each perceptron node has a bias value. When the network is trained, the weight and bias values are used as initial guesses. After the network is trained using the MLFF_NETWORK_TRAINER Function (PV-WAVE Advantage), the weight, gradient and bias values are updated in the NN_Network structure defined below.
Each perceptron has an activation function g and a bias μ. The value of the perceptron is given by g(Z), where g is the activation function and Z is the potential calculated as:
Z = w1x1 + w2x2 + … + wmxm + μ
where x1, …, xm are the values of the nodes input to this perceptron, w1, …, wm are the weights on the links feeding it, and m is the number of incoming links.
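As a brief numeric sketch (the values are arbitrary, and the Logistic activation is assumed in its standard form 1/(1 + EXP(-Z))):
x = [0.5, 1.0, -0.25]        ; values of the nodes feeding the perceptron
w = [0.2, -0.4, 0.6]         ; weights on the corresponding links
mu = 0.1                     ; bias of the perceptron
Z = TOTAL(w*x) + mu          ; potential Z = w1x1 + w2x2 + w3x3 + mu
g = 1.0/(1.0 + EXP(-Z))      ; perceptron value g(Z) for a Logistic activation
PRINT, Z, g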
All information for the network is stored in the structure called NN_Network. This structure encodes the neural network that is created by MLFF_NETWORK_INIT, modified by MLFF_NETWORK, and trained by MLFF_NETWORK_TRAINER. The following tables describe this structure:
 
Table 14-13: NN_Network
n_layers—A scalar integer defining the number of layers in the network.
layers—A list with n_layers elements containing structures of type NN_Layer, where each structure encodes information about a different layer in the network. The input layer is layers(0) and the output layer is layers(n_layers-1).
n_links—A scalar integer defining the number of links (weights) in the network.
next_link—A scalar integer corresponding to the index of the last link in the network, plus 1.
links—A list with n_links elements containing structures of type NN_Link, where each structure encodes information about a different link in the network.
n_nodes—A scalar integer defining the number of nodes in the network.
nodes—A list with n_nodes elements containing structures of type NN_Node, where each structure encodes information about a different node in the network.
The NN_Layer, NN_Link, and NN_Node structures are defined as follows.
 
Table 14-14: NN_Layer
n_nodes—A scalar integer defining the number of nodes in the layer.
nodes—An array of integers defining the index of each node in the layer.
 
Table 14-15: NN_Link
weight—A scalar float defining the numerical weight associated with the link.
gradient—A scalar float defining the gradient (first derivative) associated with the weight.
from_node—A scalar integer defining the index of the node that the link is coming from.
to_node—A scalar integer defining the index of the node that the link is going to.
 
Table 14-16: NN_Node
activation_fcn_layer_id—A scalar integer defining the layer in which the node exists.
n_inLinks—A scalar integer defining the number of links coming into the node.
n_outLinks—A scalar integer defining the number of links going out of the node.
inLinks—An array of integers defining the indices of the links coming into the node.
outLinks—An array of integers defining the indices of the links going out of the node.
gradient—A scalar float defining the gradient (first derivative) of the bias value associated with the node’s activation function.
bias—A scalar float defining the bias value associated with the node’s activation function. The bias values are learned in the same way as weights are learned.
ActivationFcn—A scalar integer 0-3 encoding the activation function type for the node, where:
*0: Linear
*1: Logistic
*2: Hyperbolic-tangent
*3: Squash
Refer to Activation Functions and Their Derivatives in Data Mining in the PV-WAVE IMSL Statistics Reference.
In particular, if network is a structure of type NN_Network, then:
 
Table 14-17: Structure Members and Their Descriptions
network.n_layers—Number of layers in the network. Layers are numbered starting at 0 for the input layer.
network.n_nodes—Total number of nodes in the network, including the input attributes.
network.n_links—Total number of links or connections between input attributes and perceptrons and between perceptrons from layer to layer.
network.layers(0)—Input layer with n_inputs attributes.
network.layers(network.n_layers-1)—Output layer with n_outputs perceptrons.
network.layers(0).n_nodes—n_inputs (number of input attributes).
network.layers(network.n_layers-1).n_nodes—n_outputs (number of output perceptrons).
network.layers(1).n_nodes—Number of perceptrons in the first hidden layer, if one exists.
network.links(i).weight—Initial weight for the ith link in the network. After training has completed, this member contains the weight used for forecasting.
network.nodes(i).bias—Initial bias value for the ith node. After training has completed, the bias value is updated.
Nodes are numbered starting at zero with the input nodes, then the hidden layer perceptrons and finally the output perceptrons.
Layers are numbered starting at zero with the input layer, then the hidden layers and finally the output layer. If there are zero hidden layers, the output layer is layer one.
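The following sketch, which uses the member access shown in Table 14-17, prints a few of these values for a small, untrained network; the layer sizes are arbitrary.
network = MLFF_NETWORK_INIT(3, 1)                           ; 3 inputs, 1 output perceptron
network = MLFF_NETWORK(network, Create_hidden_layer=2)      ; one hidden layer of 2 perceptrons
network = MLFF_NETWORK(network, /Link_all)                  ; fully connect consecutive layers
PRINT, network.n_layers, network.n_nodes, network.n_links   ; overall network sizes
PRINT, network.layers(0).n_nodes                            ; number of input attributes
PRINT, network.links(0).weight, network.links(0).gradient   ; first link, before training
PRINT, network.nodes(network.n_nodes-1).bias                ; bias of the last (output) node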
Example
For examples, refer to the Example section of the MLFF_NETWORK_TRAINER Function (PV-WAVE Advantage) and MLFF_NETWORK_FORECAST Function (PV-WAVE Advantage).

Version 2017.0
Copyright © 2017, Rogue Wave Software, Inc. All Rights Reserved.