The base UnitSpec defines a standardized way of splitting up the computations that take place on a unit. This allows a single function at the level of the Network to iterate through all of the layers, which in turn iterate through all of their units and call these functions. This makes writing process code easier, and provides a conceptual skeleton on which to implement different algorithms. Note that the unit has simple "stub" versions of these functions which simply call the corresponding one on the spec; this also simplifies programming.
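As a sketch of this dispatch pattern (hypothetical, simplified class names and members; not the actual library source), the stub-and-spec arrangement might look like:

```cpp
#include <cassert>
#include <vector>

// Simplified sketch: the Network iterates over layers and units, and
// each Unit's "stub" simply forwards to its UnitSpec.
struct Unit;

struct UnitSpec {
  virtual ~UnitSpec() {}
  virtual void Compute_Act(Unit* u);
};

struct Unit {
  UnitSpec* spec = nullptr;
  float net = 0.0f;
  float act = 0.0f;
  // "stub" version: simply calls the corresponding function on the spec
  void Compute_Act() { spec->Compute_Act(this); }
};

// a trivial default spec: activation is just the net input
void UnitSpec::Compute_Act(Unit* u) { u->act = u->net; }

struct Layer {
  std::vector<Unit*> units;
};

struct Network {
  std::vector<Layer*> layers;
  // single function at the level of the Network
  void Compute_Act() {
    for (Layer* lay : layers)
      for (Unit* u : lay->units)
        u->Compute_Act();  // each unit defers to its spec
  }
};
```

Because the spec's function is virtual, a derived spec can replace the computation for every unit that points at it, which is what gives the "conceptual skeleton" its flexibility.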
InitState(Unit* u)
Initialize the unit's state variables (e.g., its activation and net input values).
InitWtDelta(Unit* u)
Initialize the weight-change (delta) variables on the unit's connections.
InitWtState(Unit* u)
Initialize the weights themselves (and any other connection state).
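A minimal sketch of the division of labor among the three initialization functions, assuming hypothetical, simplified data structures (the real ones differ):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

struct Connection {
  float wt = 0.0f;   // weight value
  float dwt = 0.0f;  // accumulated weight change (delta)
};

struct Unit {
  float act = 0.0f;
  float net = 0.0f;
  std::vector<Connection> recv;  // receiving connections
};

struct UnitSpec {
  // InitState: reset the unit's state variables (activation, net input)
  void InitState(Unit* u) { u->act = 0.0f; u->net = 0.0f; }

  // InitWtDelta: reset the accumulated weight changes on the connections
  void InitWtDelta(Unit* u) {
    for (Connection& c : u->recv) c.dwt = 0.0f;
  }

  // InitWtState: (re)initialize the weights themselves, e.g. to small
  // random values in [-0.5, 0.5]
  void InitWtState(Unit* u) {
    for (Connection& c : u->recv)
      c.wt = float(std::rand()) / float(RAND_MAX) - 0.5f;
  }
};
```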
Compute_Net(Unit* u)
Compute the net input to the unit, which typically sets the net
field to the summed product of each sending unit's
activation value times the corresponding weight.
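This receiver-based computation can be sketched as follows (assumed, simplified structures):

```cpp
#include <cassert>
#include <vector>

struct Unit;

struct Connection {
  Unit* from;  // sending unit
  float wt;    // weight
};

struct Unit {
  float act = 0.0f;
  float net = 0.0f;
  std::vector<Connection> recv;  // receiving connections
};

// Sum, over all receiving connections, the sending unit's activation
// times the weight, and store the result in the net field.
void Compute_Net(Unit* u) {
  float sum = 0.0f;
  for (const Connection& c : u->recv)
    sum += c.from->act * c.wt;
  u->net = sum;
}
```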
Send_Net(Unit* u)
Instead of the unit computing the net input it receives, this sends the
unit's activation times the weight to the net
field of the units it sends to. This way of
computing net input is useful when not all units send activation (i.e.,
if there is a threshold for sending activation or "firing"). A given
algorithm will use either Compute_Net
or Send_Net
, but not
both.
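The sender-based alternative might be sketched like this (assumed, simplified structures): a unit that is "firing" adds its contribution into the net field of each unit it projects to, so units below threshold cost nothing.

```cpp
#include <cassert>
#include <vector>

struct Unit;

struct SendCon {
  Unit* to;  // receiving unit
  float wt;  // weight
};

struct Unit {
  float act = 0.0f;
  float net = 0.0f;
  std::vector<SendCon> send;  // sending connections
};

// Accumulate this unit's activation times the weight into the net
// field of each unit it sends to.
void Send_Net(Unit* u) {
  for (const SendCon& c : u->send)
    c.to->net += u->act * c.wt;
}

// Only units above a firing threshold need to send at all:
void Send_IfOverThresh(Unit* u, float thresh) {
  if (u->act > thresh) Send_Net(u);
}
```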
Compute_Act(Unit* u)
Compute the unit's activation value, typically as a function of its net input.
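As one concrete (assumed) example of an activation function, a logistic sigmoid of the net input could be written as:

```cpp
#include <cassert>
#include <cmath>

struct Unit {
  float net = 0.0f;
  float act = 0.0f;
};

// One common choice of activation function: a logistic sigmoid of the
// net input. Actual algorithms define their own activation functions.
void Compute_Act(Unit* u) {
  u->act = 1.0f / (1.0f + std::exp(-u->net));
}
```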
Compute_dWt(Unit* u)
Typically iterates over the unit's connections and calls the
Compute_dWt
function on them, which should compute the
amount that the weights should be changed based on the current state of
the network, and a learning rule which translates this into weight
changes. It should always increment the current weight-change
value, so that learning can occur either pattern-by-pattern ("online"
mode) or over multiple patterns ("batch" mode).
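The key property described above, incrementing rather than overwriting the weight-change variable, can be sketched as follows (assumed structures and a simple Hebbian rule chosen purely for illustration):

```cpp
#include <cassert>
#include <vector>

struct Unit;

struct Connection {
  Unit* from;  // sending unit
  float wt;    // weight
  float dwt;   // accumulated weight change
};

struct Unit {
  float act = 0.0f;
  std::vector<Connection> recv;
};

// Increment (never overwrite) the weight-change variable, so deltas
// can accumulate across multiple patterns in batch mode.
void Compute_dWt(Unit* u) {
  for (Connection& c : u->recv)
    c.dwt += c.from->act * u->act;  // note +=, not =
}
```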
UpdateWeights(Unit* u)
Applies the weight changes computed by
Compute_dWt
to the weights, applying learning rates,
etc. This function should always reset the weight-change variable after
it updates the weights, so that Compute_dWt
can always increment
weight changes. Note that this function is called by the
EpochProcess, which decides whether to perform online or batch-mode
learning (see section 12.4.3 Iterating over Trials: EpochProcess).
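The apply-then-reset contract can be sketched like this (assumed, simplified structures): the accumulated delta is scaled by a learning rate and folded into the weight, then zeroed so the next round of Compute_dWt calls can increment from zero again.

```cpp
#include <cassert>
#include <vector>

struct Connection {
  float wt = 0.0f;   // weight value
  float dwt = 0.0f;  // accumulated weight change
};

struct Unit {
  std::vector<Connection> recv;
};

void UpdateWeights(Unit* u, float lrate) {
  for (Connection& c : u->recv) {
    c.wt += lrate * c.dwt;  // apply learning rate to accumulated change
    c.dwt = 0.0f;           // reset so Compute_dWt can increment afresh
  }
}
```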