Many Artificial Neural Network design
algorithms and learning methods involve the minimization of an
error objective function. During learning, weight values are
updated following a strategy that tends to minimize the final
mean error in the network's performance. Weight values are
classically regarded as a representation of the synaptic weights in
biological neurons, and their ability to change can be
interpreted as artificial plasticity inspired by this biological
property of neurons. Accordingly, metaplasticity is interpreted
in this paper as the ability to modulate the efficiency of artificial
plasticity, giving greater relevance to weight updates triggered by less
frequent activations and less relevance to those triggered by frequent ones.
By modeling this interpretation in the training phase, the
hypothesis of improved training is tested for the Multilayer
Perceptron trained with Backpropagation. The results show
considerably more efficient training while maintaining the Artificial Neural
Network's performance.
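
The following is a minimal sketch of the weighting idea described above, not the authors' implementation: a single-hidden-layer perceptron trained with Backpropagation in which each weight update is scaled by a metaplasticity factor that is large for infrequent input patterns and small for frequent ones. The Gaussian-like density estimate, network size, and learning rate used here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def metaplasticity_factor(x, a=1.0, b=0.25):
    """Assumed inverse-density weighting: rare patterns (large ||x||)
    receive a larger factor, frequent ones a smaller one."""
    return a * np.exp(b * np.sum(x ** 2))

def train_mlp(X, y, hidden=8, lr=0.5, epochs=2000):
    n_in = X.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, 1))
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(epochs):
        for x, t in zip(X, y):
            # Forward pass
            h = sigmoid(x @ W1)
            o = sigmoid(h @ W2)
            # Standard backpropagation deltas for squared error
            delta_o = (o - t) * o * (1 - o)
            delta_h = (delta_o @ W2.T) * h * (1 - h)
            # Metaplasticity: scale the step by the rarity of this pattern
            m = metaplasticity_factor(x)
            W2 -= lr * m * np.outer(h, delta_o)
            W1 -= lr * m * np.outer(x, delta_h)
    return W1, W2

if __name__ == "__main__":
    # Toy XOR problem used only to exercise the training loop
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])
    W1, W2 = train_mlp(X, y)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    print(sigmoid(sigmoid(X @ W1) @ W2).round(2))
```

In this sketch the factor simply multiplies the learning step; other realizations could instead reshape the error term or the weight-update rule itself.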