Other Topics

Clipping

If a neuron uses clipping, its activation is constrained to lie between its upper and lower bounds: an activation above the upper bound is set to the upper bound, and an activation below the lower bound is set to the lower bound. Weights are clipped to their strength bounds in the same way.
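The clipping rule described above can be sketched as a simple function (a minimal illustration, not Simbrain's actual code):

```python
def clip(value, lower, upper):
    """Constrain a value to lie within [lower, upper]."""
    return max(lower, min(upper, value))

# An activation above the upper bound is set to the upper bound,
# one below the lower bound is set to the lower bound, and values
# already in range are unchanged.
print(clip(1.7, -1.0, 1.0))   # 1.0
print(clip(-2.3, -1.0, 1.0))  # -1.0
print(clip(0.5, -1.0, 1.0))   # 0.5
```

The same function applies unchanged to weight strengths and their bounds.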

Clamping

In general, a clamped synapse or node will not change over time; it is "clamped" to its current value. There are two ways to clamp neurons and synapses in Simbrain.

(1) Set the type of a synapse to clamped synapse or the type of a neuron to clamped neuron. This allows specific subsets of synapses or neurons to be clamped.

(2) In the Toolbar or Edit menu, select "Clamp weights" or "Clamp neurons." These cause all weights or all neurons in a simulation to be clamped. This is useful when, for example, training a Hopfield network: you clamp the neurons at a value and run the network to train the weights; then you unclamp the neurons and clamp the weights to test the network's memory. This method applies to the network globally, in that the neurons or synapses are not individually set to be clamped. Its advantage is that it lets you temporarily turn off learning to focus on activation dynamics, or temporarily block activation change to focus on weight dynamics.

Note that some subnetworks override clamping, in particular when they manually train their weights. Also note that the activation of a clamped neuron can still be changed manually using the up and down arrow keys, and the same holds for the strength of a clamped weight.
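The effect of clamping on a network update can be sketched as follows (a hypothetical illustration of the behavior, not Simbrain's actual update code): clamped elements simply skip their update and keep their current value.

```python
def update_network(activations, clamped, compute_activation):
    """Update each neuron unless it is clamped to its current value.

    activations: current activation of each neuron
    clamped: per-neuron flag; True means the neuron keeps its value
    compute_activation: rule giving the new activation for neuron i
    """
    return [a if is_clamped else compute_activation(i)
            for i, (a, is_clamped) in enumerate(zip(activations, clamped))]

# Neuron 1 is clamped, so it keeps its value; the others are recomputed
# (here with a toy rule that sets every unclamped neuron to 0).
acts = [0.2, 0.9, 0.4]
new = update_network(acts, [False, True, False], lambda i: 0.0)
print(new)  # [0.0, 0.9, 0.0]
```

Global "Clamp neurons" corresponds to setting every flag to True at once, without marking individual neurons as clamped.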

Weighted Input

Each input to a node can have a different amount of influence on that node. The amount of influence an input has is called the "weight" of that input. We represent the weight of the ith input by wi and the activation of the ith source node by ai. The weighted input is then the sum, over all inputs, of each weight times its activation: w1a1 + w2a2 + ... + wnan.

Note that a sensory input term I is also added to the weighted input if the node has a sensory coupling attached to it.
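The weighted-input sum, with an optional sensory input term, can be sketched as follows (a minimal illustration; the parameter names are ours, not Simbrain's):

```python
def weighted_input(weights, activations, sensory_input=0.0):
    """Sum of weight times activation over all inputs, plus an
    optional sensory input term I (zero when no coupling is attached)."""
    return sum(w * a for w, a in zip(weights, activations)) + sensory_input

# Two inputs: weights 0.5 and -1.0, activations 1.0 and 0.5.
print(weighted_input([0.5, -1.0], [1.0, 0.5]))        # 0.0
print(weighted_input([0.5, -1.0], [1.0, 0.5], 0.25))  # 0.25
```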

When a source neuron is a spiking neuron, the weight-times-activation term for that neuron is replaced by a more complex term; see the spiking networks page for details.

"Weighted input" is also referred to as "net input" in much of the connectionist literature. To make a neuron whose activation value equals its weighted input, use a linear neuron with slope = 1 and bias = 0.
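A linear neuron computes slope times its net input plus a bias, so with slope 1 and bias 0 its activation equals its weighted input exactly (a sketch of the rule, not Simbrain's code):

```python
def linear_neuron(net_input, slope=1.0, bias=0.0):
    """Linear activation rule: slope * net_input + bias."""
    return slope * net_input + bias

# With slope = 1 and bias = 0, the activation passes through unchanged.
print(linear_neuron(0.75))  # 0.75
```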