Library

net: Neural Networks

This module contains the basic network architectures:

Network type              Function   Layers   Supported train functions                   Error fcn
------------------------  ---------  -------  ------------------------------------------  ---------
Single-layer perceptron   newp       1        train_delta                                 SSE
Multi-layer perceptron    newff      >=1      train_gd, train_gdm, train_gda, train_gdx,  SSE
                                              train_rprop, train_bfgs*, train_cg
Competitive layer         newc       1        train_wta, train_cwta*                      SAE
LVQ                       newlvq     2        train_lvq                                   MSE
Elman                     newelm     >=1      train_gdx                                   MSE
Hopfield                  newhop     1        None                                        None
Hemming                   newhem     2        None                                        None

Note

* - default training function

neurolab.net.newc(minmax, cn)[source]

Create a competitive layer (Kohonen network)

Parameters:
minmax: list of lists

The length of the outer list is the number of input neurons; each inner list must contain 2 elements, the min and max of that input's range.

cn: int

Number of output neurons

Returns:

net: Net

Example:
>>> # create network with 2 inputs and 10 neurons
>>> net = newc([[-1, 1], [-1, 1]], 10)
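A short training sketch (not from the original docs, shown for orientation): it assumes neurolab is imported as nl and uses randomly generated 2-D points purely for illustration; a newc layer trains with train_cwta by default, so no target is passed.

>>> import numpy as np
>>> import neurolab as nl
>>> # illustrative data: 100 random 2-D points in [-1, 1]
>>> inp = np.random.rand(100, 2) * 2 - 1
>>> net = nl.net.newc([[-1, 1], [-1, 1]], 10)
>>> # default training function is train_cwta; no target is needed
>>> error = net.train(inp, epochs=200, show=20)
>>> out = net.sim(inp)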
neurolab.net.newelm(minmax, size, transf=None)[source]

Create an Elman recurrent network

Parameters:
minmax: list of lists

The length of the outer list is the number of input neurons; each inner list must contain 2 elements, the min and max of that input's range.

size: list of int

The length of the list equals the number of layers, excluding the input layer; each element is the number of neurons in the corresponding layer.

Returns:

net: Net

Example:
>>> # 1 input, input range is [-1, 1], 1 output neuron, 1 layer including output layer
>>> net = newelm([[-1, 1]], [1], [trans.PureLin()])
>>> net.layers[0].np['w'][:] = 1 # set weight for all input neurons to 1
>>> net.layers[0].np['b'][:] = 0 # set bias for all input neurons to 0
>>> net.sim([[1], [1], [1], [3]])
array([[ 1.],
       [ 2.],
       [ 3.],
       [ 6.]])
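For orientation, a minimal training sketch (illustrative assumptions, not from the original docs): a two-layer Elman network fitted to a toy sine sequence with the default train_gdx function.

>>> import numpy as np
>>> import neurolab as nl
>>> # toy sequence: reproduce a sine wave (illustrative data only)
>>> inp = np.sin(np.arange(0, 20)).reshape(-1, 1)
>>> tar = inp.copy()
>>> # 1 input in [-2, 2]; 5 recurrent hidden neurons, 1 linear output neuron
>>> net = nl.net.newelm([[-2, 2]], [5, 1], [nl.trans.TanSig(), nl.trans.PureLin()])
>>> err = net.train(inp, tar, epochs=500, show=100, goal=0.01)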
neurolab.net.newff(minmax, size, transf=None)[source]

Create a multilayer perceptron

Parameters:
minmax: list of lists

The length of the outer list is the number of input neurons; each inner list must contain 2 elements, the min and max of that input's range.

size: list of int

The length of the list equals the number of layers, excluding the input layer; each element is the number of neurons in the corresponding layer.

transf: list (default TanSig)

List of activation functions, one for each layer

Returns:

net: Net

Example:
>>> # create neural net with 2 inputs
>>> # input range for each input is [-0.5, 0.5]
>>> # 3 neurons for hidden layer, 1 neuron for output
>>> # 2 layers including hidden layer and output layer
>>> net = newff([[-0.5, 0.5], [-0.5, 0.5]], [3, 1])
>>> net.ci
2
>>> net.co
1
>>> len(net.layers)
2
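A minimal end-to-end sketch (assumed usage, not from the original docs): fit a small feed-forward network to a toy quadratic with the network's default training function.

>>> import numpy as np
>>> import neurolab as nl
>>> # toy regression target: y = x**2 on [-0.5, 0.5] (illustrative data)
>>> x = np.linspace(-0.5, 0.5, 20).reshape(-1, 1)
>>> y = x ** 2
>>> net = nl.net.newff([[-0.5, 0.5]], [5, 1])
>>> err = net.train(x, y, epochs=500, show=100, goal=0.001)
>>> out = net.sim(x)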
neurolab.net.newhem(target, transf=None, max_iter=10, delta=0)[source]

Create a Hemming recurrent network with 2 layers

Parameters:
target: array like (l x net.co)

train target patterns

transf: func (default SatLinPrm(0.1, 0, 10))

Activation function of input layer

max_iter: int (default 10)

Maximum number of recurrent iterations

delta: float (default 0)

Minimum difference between 2 successive outputs to stop the recurrent cycle

Returns:

net: Net

Example:
>>> net = newhem([[-1, -1, -1], [1, -1, 1]])
>>> output = net.sim([[-1, 1, -1], [1, -1, 1]])
neurolab.net.newhop(target, transf=None, max_init=10, delta=0)[source]

Create a Hopfield recurrent network

Parameters:
target: array like (l x net.co)

train target patterns

transf: func (default HardLims)

Activation function

max_init: int (default 10)

Maximum number of recurrent iterations

delta: float (default 0)

Minimum difference between 2 successive outputs to stop the recurrent cycle

Returns:

net: Net

Example:
>>> net = newhop([[-1, -1, -1], [1, -1, 1]])
>>> output = net.sim([[-1, 1, -1], [1, -1, 1]])
neurolab.net.newlvq(minmax, cn0, pc)[source]

Create a learning vector quantization (LVQ) network

Parameters:
minmax: list of lists

The length of the outer list is the number of input neurons; each inner list must contain 2 elements, the min and max of that input's range.

cn0: int

Number of neurons in the input (competitive) layer

pc: list

List of class fractions; sum(pc) must equal 1

Returns:

net: Net

Example:
>>> # create network with 2 inputs,
>>> # 10 neurons in the competitive layer and 2 output classes (60% / 40%)
>>> net = newlvq([[-1, 1], [-1, 1]], 10, [0.6, 0.4])
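A training sketch under assumptions (the four sample points and the one-hot class targets are purely illustrative; nl.tool.minmax is used here to derive the input ranges from the data).

>>> import numpy as np
>>> import neurolab as nl
>>> # two illustrative 2-D classes, targets encoded one-hot per output class
>>> inp = np.array([[-0.6, -0.4], [-0.5, -0.5], [0.5, 0.4], [0.6, 0.5]])
>>> tar = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
>>> # 2 inputs, 4 competitive neurons, classes split 60% / 40%
>>> net = nl.net.newlvq(nl.tool.minmax(inp), 4, [0.6, 0.4])
>>> err = net.train(inp, tar, epochs=300, goal=-1)
>>> out = net.sim(inp)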
neurolab.net.newp(minmax, cn, transf=HardLim())[source]

Create a single-layer perceptron

Parameters:
minmax: list of lists

The length of the outer list is the number of input neurons; each inner list must contain 2 elements, the min and max of that input's range.

cn: int

Number of output neurons

transf: func (default HardLim)

Activation function

Returns:

net: Net

Example:
>>> # create network with 2 inputs and 10 neurons
>>> net = newp([[-1, 1], [-1, 1]], 10)
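A minimal sketch (illustrative, not from the original docs): train a single perceptron neuron on logical AND with the default train_delta function.

>>> import numpy as np
>>> import neurolab as nl
>>> # logical AND: a small linearly separable problem (illustrative)
>>> inp = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
>>> tar = np.array([[0.], [0.], [0.], [1.]])
>>> net = nl.net.newp([[0, 1], [0, 1]], 1)
>>> err = net.train(inp, tar, epochs=100, show=10, lr=0.1)
>>> out = net.sim(inp)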

train: Train Algorithms

Training algorithms based on gradient descent

neurolab.train.train_gd()

Gradient descent backpropagation

Support networks:
newff (multilayer perceptron)

Parameters:
input: array like (l x net.ci)

train input patterns

target: array like (l x net.co)

train target patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

lr: float (default 0.01)

learning rate

adapt: bool (default False)

type of learning

rr: float (default 0.0)

Regularization ratio. Must be between 0 and 1.
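How a specific training function is selected and its keyword parameters passed (a sketch under assumptions; the cubic toy data is illustrative): assign the function to net.trainf, then forward the parameters through net.train.

>>> import numpy as np
>>> import neurolab as nl
>>> x = np.linspace(-1, 1, 20).reshape(-1, 1)
>>> y = x ** 3
>>> net = nl.net.newff([[-1, 1]], [5, 1])
>>> net.trainf = nl.train.train_gd   # override the default training function
>>> # keyword arguments are passed through to train_gd
>>> err = net.train(x, y, epochs=500, show=100, goal=0.01, lr=0.05, adapt=False)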

neurolab.train.train_gdm()

Gradient descent with momentum backpropagation

Support networks:
newff (multilayer perceptron)

Parameters:
input: array like (l x net.ci)

train input patterns

target: array like (l x net.co)

train target patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

lr: float (default 0.01)

learning rate

adapt: bool (default False)

type of learning

mc: float (default 0.9)

Momentum constant

rr: float (default 0.0)

Regularization ratio. Must be between 0 and 1.

neurolab.train.train_gda()

Gradient descent with adaptive learning rate

Support networks:
newff (multilayer perceptron)

Parameters:
input: array like (l x net.ci)

train input patterns

target: array like (l x net.co)

train target patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

lr: float (default 0.01)

learning rate

adapt: bool (default False)

type of learning

lr_inc: float (> 1, default 1.05)

Ratio to increase learning rate

lr_dec: float (< 1, default 0.7)

Ratio to decrease learning rate

max_perf_inc: float (> 1, default 1.04)

Maximum performance increase

rr: float (default 0.0)

Regularization ratio. Must be between 0 and 1.

neurolab.train.train_gdx()

Gradient descent with momentum backpropagation and adaptive lr

Support networks:
newff (multilayer perceptron)

Parameters:
input: array like (l x net.ci)

train input patterns

target: array like (l x net.co)

train target patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

lr: float (default 0.01)

learning rate

adapt: bool (default False)

type of learning

lr_inc: float (default 1.05)

Ratio to increase learning rate

lr_dec: float (default 0.7)

Ratio to decrease learning rate

max_perf_inc: float (default 1.04)

Maximum performance increase

mc: float (default 0.9)

Momentum constant

rr: float (default 0.0)

Regularization ratio. Must be between 0 and 1.

neurolab.train.train_rprop()

Resilient Backpropagation

Support networks:
newff (multilayer perceptron)

Parameters:
input: array like (l x net.ci)

train input patterns

target: array like (l x net.co)

train target patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

lr: float (default 0.07)

learning rate (init rate)

adapt: bool (default False)

type of learning

type of learning

rate_dec: float (default 0.5)

Decrement to weight change

rate_inc: float (default 1.2)

Increment to weight change

rate_min: float (default 1e-9)

Minimum performance gradient

rate_max: float (default 50)

Maximum weight change

Training algorithms based on the Winner Take All rule

neurolab.train.train_wta()

Winner Take All algorithm

Support networks:
 

newc (Kohonen layer)

Parameters:
input: array like (l x net.ci)

train input patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

neurolab.train.train_cwta()

Conscience Winner Take All algorithm

Support networks:
 

newc (Kohonen layer)

Parameters:
input: array like (l x net.ci)

train input patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

Training algorithms based on scipy.optimize

neurolab.train.train_bfgs()

Broyden–Fletcher–Goldfarb–Shanno (BFGS) method, using scipy.optimize.fmin_bfgs

Support networks:
newff (multilayer perceptron)

Parameters:
input: array like (l x net.ci)

train input patterns

target: array like (l x net.co)

train target patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

rr: float (default 0.0)

Regularization ratio. Must be between 0 and 1.

neurolab.train.train_cg()

Conjugate gradient algorithm, using scipy.optimize.fmin_cg

Support networks:
newff (multilayer perceptron)

Parameters:
input: array like (l x net.ci)

train input patterns

target: array like (l x net.co)

train target patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

rr: float (default 0.0)

Regularization ratio. Must be between 0 and 1.

neurolab.train.train_ncg()

Newton-CG method, using scipy.optimize.fmin_ncg

Support networks:
newff (multilayer perceptron)

Parameters:
input: array like (l x net.ci)

train input patterns

target: array like (l x net.co)

train target patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

rr: float (default 0.0)

Regularization ratio. Must be between 0 and 1.

Training algorithms for LVQ networks

neurolab.train.train_lvq()

LVQ1 train function

Support networks:
 

newlvq

Parameters:
input: array like (l x net.ci)

train input patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

lr: float (default 0.01)

learning rate

adapt: bool (default False)

type of learning

Delta rule

neurolab.train.train_delta()

Train with Delta rule

Support networks:
 

newp (one-layer perceptron)

Parameters:
input: array like (l x net.ci)

train input patterns

target: array like (l x net.co)

train target patterns

epochs: int (default 500)

Number of train epochs

show: int (default 100)

Print period

goal: float (default 0.01)

The goal of train

lr: float (default 0.01)

learning rate

error: Error functions

Training error functions with derivatives

Example:
>>> msef = MSE()
>>> x = np.array([[1.0, 0.0], [2.0, 0.0]])
>>> msef(x, 0)
1.25
>>> # calc derivative:
>>> msef.deriv(x[0], 0)
array([ 1.,  0.])
class neurolab.error.CEE[source]

Cross-entropy error function. For use when targets are in {0, 1}

C = -sum( t * log(o) + (1 - t) * log(1 - o))

Thanks to kwecht (https://github.com/kwecht)

Parameters:
target: ndarray

target values for network

output: ndarray

simulated output of network

Returns:
v: float

Error value

deriv(target, output)[source]

Derivative of CEE error function

dC/dy = - t/o + (1 - t) / (1 - o)

Parameters:
target: ndarray

target values for network

output: ndarray

simulated output of network

Returns:
d: ndarray

Derivative: dE/d_out
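A usage sketch (assumed, mirroring the MSE example at the top of this module; the target and output values below are illustrative, and the outputs must lie in (0, 1)).

>>> import numpy as np
>>> import neurolab as nl
>>> cee = nl.error.CEE()
>>> target = np.array([[1.0, 0.0]])
>>> output = np.array([[0.8, 0.3]])
>>> e = cee(target, output)              # scalar error value
>>> d = cee.deriv(target[0], output[0])  # gradient dC/d_out for one sample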

class neurolab.error.MAE[source]

Mean absolute error function

Parameters:
target: ndarray

target values for network

output: ndarray

simulated output of network

Returns:
v: float

Error value

deriv(target, output)[source]

Derivative of MAE error function

Parameters:
target: ndarray

target values for network

output: ndarray

simulated output of network

Returns:
d: ndarray

Derivative: dE/d_out

class neurolab.error.MSE[source]

Mean squared error function

Parameters:
target: ndarray

target values for network

output: ndarray

simulated output of network

Returns:
v: float

Error value

Example:
>>> f = MSE()
>>> x = np.array([[1.0, 0.0], [2.0, 0.0]])
>>> f(x, 0)
1.25
deriv(target, output)[source]

Derivative of MSE error function

Parameters:
target: ndarray

target values for network

output: ndarray

simulated output of network

Returns:
d: ndarray

Derivative: dE/d_out

Example:
>>> f = MSE()
>>> x = np.array([1.0, 0.0])
>>> # calc derivative:
>>> f.deriv(x, 0)
array([ 1.,  0.])
class neurolab.error.SAE[source]

Sum absolute error function

Parameters:
target: ndarray

target values for network

output: ndarray

simulated output of network

Returns:
v: float

Error value

deriv(target, output)[source]

Derivative of SAE error function

Parameters:
target: ndarray

target values for network

output: ndarray

simulated output of network

Returns:
d: ndarray

Derivative: dE/d_out

class neurolab.error.SSE[source]

Sum squared error function

Parameters:
target: ndarray

target values for network

output: ndarray

simulated output of network

Returns:
v: float

Error value

deriv(target, output)[source]

Derivative of SSE error function

Parameters:
target: ndarray

target values for network

output: ndarray

simulated output of network

Returns:
d: ndarray

Derivative: dE/d_out
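Error functions can also be attached to a network before training (a sketch under the assumption that the trainer reads the error function from net.errorf; the newff call and the toy data are illustrative).

>>> import numpy as np
>>> import neurolab as nl
>>> net = nl.net.newff([[-1, 1]], [5, 1])
>>> net.errorf = nl.error.MSE()   # replace the default SSE error function
>>> x = np.linspace(-1, 1, 20).reshape(-1, 1)
>>> y = x ** 2
>>> err = net.train(x, y, epochs=200, show=50, goal=0.001)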

trans: Transfer functions

Transfer function with derivatives

Example:
>>> import numpy as np
>>> f = TanSig()
>>> x = np.linspace(-5,5,100)
>>> y = f(x)
>>> df_on_dy = f.deriv(x, y) # calc derivative
>>> f.out_minmax    # list output range [min, max]
[-1, 1]
>>> f.inp_active    # list input active range [min, max]
[-2, 2]
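Transfer functions are usually passed to the network constructors rather than called directly (a sketch; the layer sizes are illustrative).

>>> import neurolab as nl
>>> # hidden layer with logistic sigmoid, linear output layer
>>> net = nl.net.newff([[-1, 1]], [4, 1], [nl.trans.LogSig(), nl.trans.PureLin()])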
class neurolab.trans.Competitive[source]

Competitive transfer function

Parameters:
x: ndarray

Input array

Returns:
y : ndarray

may take the values 0 or 1: 1 if the element is the minimum of x, else 0

Example:
>>> f = Competitive()
>>> f([-5, -0.1, 0, 0.1, 100])
array([ 1.,  0.,  0.,  0.,  0.])
>>> f([-5, -0.1, 0, -6, 100])
array([ 0.,  0.,  0.,  1.,  0.])
class neurolab.trans.HardLim[source]

Hard limit transfer function

Parameters:
x: ndarray

Input array

Returns:
y : ndarray

may take the following values: 0, 1

Example:
>>> f = HardLim()
>>> x = np.array([-5, -0.1, 0, 0.1, 100])
>>> f(x)
array([ 0.,  0.,  0.,  1.,  1.])
deriv(x, y)[source]

Derivative of transfer function HardLim

class neurolab.trans.HardLims[source]

Symmetric hard limit transfer function

Parameters:
x: ndarray

Input array

Returns:
y : ndarray

may take the following values: -1, 1

Example:
>>> f = HardLims()
>>> x = np.array([-5, -0.1, 0, 0.1, 100])
>>> f(x)
array([-1., -1., -1.,  1.,  1.])
deriv(x, y)[source]

Derivative of transfer function HardLims

class neurolab.trans.LogSig[source]

Logarithmic sigmoid transfer function

Parameters:
x: ndarray

Input array

Returns:
y : ndarray

The corresponding logarithmic sigmoid values.

Example:
>>> f = LogSig()
>>> x = np.array([-np.Inf, 0.0, np.Inf])
>>> f(x).tolist()
[0.0, 0.5, 1.0]
deriv(x, y)[source]

Derivative of transfer function LogSig

class neurolab.trans.PureLin[source]

Linear transfer function

Parameters:
x: ndarray

Input array

Returns:
y : ndarray

copy of x

Example:
>>> import numpy as np
>>> f = PureLin()
>>> x = np.array([-100., 50., 10., 40.])
>>> f(x).tolist()
[-100.0, 50.0, 10.0, 40.0]
deriv(x, y)[source]

Derivative of transfer function PureLin

class neurolab.trans.SatLin[source]

Saturating linear transfer function

Parameters:
x: ndarray

Input array

Returns:
y : ndarray

0 if x < 0; x if 0 <= x <= 1; 1 if x > 1

Example:
>>> f = SatLin()
>>> x = np.array([-5, -0.1, 0, 0.1, 100])
>>> f(x)
array([ 0. ,  0. ,  0. ,  0.1,  1. ])
deriv(x, y)[source]

Derivative of transfer function SatLin

class neurolab.trans.SatLinPrm(k=1, out_min=0, out_max=1)[source]

Linear transfer function with parametric output. May be used instead of SatLin and SatLins.

Init Parameters:
 
k: float default 1

output scaling

out_min: float default 0

minimum output

out_max: float default 1

maximum output

Parameters:
x: ndarray

Input array

Returns:
y : ndarray

with default parameters: 0 if x < 0; x if 0 <= x <= 1; 1 if x > 1

Example:
>>> f = SatLinPrm()
>>> x = np.array([-5, -0.1, 0, 0.1, 100])
>>> f(x)
array([ 0. ,  0. ,  0. ,  0.1,  1. ])
>>> f = SatLinPrm(1, -1, 1)
>>> f(x)
array([-1. , -0.1,  0. ,  0.1,  1. ])
deriv(x, y)[source]

Derivative of transfer function SatLinPrm

class neurolab.trans.SatLins[source]

Symmetric saturating linear transfer function

Parameters:
x: ndarray

Input array

Returns:
y : ndarray

-1 if x < -1; x if -1 <= x <= 1; 1 if x > 1

Example:
>>> f = SatLins()
>>> x = np.array([-5, -1, 0, 0.1, 100])
>>> f(x)
array([-1. , -1. ,  0. ,  0.1,  1. ])
deriv(x, y)[source]

Derivative of transfer function SatLins

class neurolab.trans.SoftMax[source]

Soft max transfer function

Parameters:
x: ndarray

Input array

Returns:
y : ndarray

values in the range [0, 1]

Example:
>>> from numpy import floor
>>> f = SoftMax()
>>> floor(f([0, 1, 0.5, -0.5]) * 10)
array([ 1.,  4.,  2.,  1.])
class neurolab.trans.TanSig[source]

Hyperbolic tangent sigmoid transfer function

Parameters:
x: ndarray

Input array

Returns:
y : ndarray

The corresponding hyperbolic tangent values.

Example:
>>> f = TanSig()
>>> f([-np.Inf, 0.0, np.Inf])
array([-1.,  0.,  1.])
deriv(x, y)[source]

Derivative of transfer function TanSig

init: Initializing functions

Functions of initialization layers

class neurolab.init.InitRand(minmax, init_prop)[source]

Initialize the specified properties of the layer with random numbers within the specified limits

Parameters:
layer: core.Layer object

Initialization layer

neurolab.init.init_rand(layer, min=-0.5, max=0.5, init_prop='w')[source]

Initialize the specified property of the layer with random numbers within the specified limits

Parameters:
layer: core.Layer object

Initialization layer

min: float (default -0.5)

minimum value after the initialization

max: float (default 0.5)

maximum value after the initialization

init_prop: str (default ‘w’)

name of initialized property, must be in layer.np

neurolab.init.init_zeros(layer)[source]

Set all layer properties to zero

Parameters:
layer: core.Layer object

Initialization layer

neurolab.init.initnw(layer)[source]

Nguyen-Widrow initialization function

Parameters:
layer: core.Layer object

Initialization layer

neurolab.init.initwb_lin(layer)[source]

Initialize weights and bias with linearly spaced (linspace) values across the active input range, rather than random values

This function is intended for tests

Parameters:
layer: core.Layer object

Initialization layer

neurolab.init.initwb_reg(layer)[source]

Initialize weights and bias in the range defined by the activation function (transf.inp_active)

Parameters:
layer: core.Layer object

Initialization layer

neurolab.init.midpoint(layer)[source]

Sets weights to the center of the input ranges

Parameters:
layer: core.Layer object

Initialization layer
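Initialization functions are assigned per layer through layer.initf and applied by net.init() (a sketch; the network, the ranges, and the property string 'wb' are illustrative).

>>> import neurolab as nl
>>> net = nl.net.newff([[-1, 1]], [5, 1])
>>> # re-initialize weights and biases of the hidden layer in [-0.1, 0.1]
>>> net.layers[0].initf = nl.init.InitRand([-0.1, 0.1], 'wb')
>>> # Nguyen-Widrow initialization for the output layer
>>> net.layers[1].initf = nl.init.initnw
>>> net.init()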