In the neural network literature, there is strong interest in identifying and defining activation functions that can improve network performance. In recent years, the scientific community has shown renewed interest in activation functions that can be trained during the learning process, usually referred to as trainable, learnable, or adaptable activation functions; they appear to lead to better network performance. Diverse and heterogeneous models of trainable activation functions have been proposed in the literature. In this paper, we present a survey of these models. Starting from a discussion of the use of the term “activation function” in the literature, we propose a taxonomy of trainable activation functions, highlight common and distinctive properties of recent and past models, and discuss the main advantages and limitations of this type of approach. We show that many of the proposed approaches are equivalent to adding neuron layers that use fixed (non-trainable) activation functions together with a simple local rule that constrains the corresponding weight layers.
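To make the idea of a trainable activation function concrete, the following is a minimal sketch (not taken from the paper) of a PReLU-style activation, f(x) = x for x > 0 and f(x) = a·x otherwise, where the slope a is a parameter updated by gradient descent alongside the network weights. The class name and learning rate are illustrative assumptions.

```python
import numpy as np

class PReLU:
    """Sketch of a trainable activation: the negative-side slope `a` is learned."""

    def __init__(self, a=0.25):
        self.a = a  # trainable parameter of the activation function

    def forward(self, x):
        self.x = x  # cache input for the backward pass
        return np.where(x > 0, x, self.a * x)

    def backward(self, grad_out, lr=0.1):
        # Gradient w.r.t. the input (uses the current slope `a`)
        grad_x = grad_out * np.where(self.x > 0, 1.0, self.a)
        # Gradient w.r.t. the trainable slope: df/da = x on the negative side
        grad_a = np.sum(grad_out * np.where(self.x > 0, 0.0, self.x))
        self.a -= lr * grad_a  # update the activation's own parameter
        return grad_x
```

A usage example: `PReLU(a=0.25).forward(np.array([-2.0, 3.0]))` yields `[-0.5, 3.0]`, and a backward pass adjusts `a`, so the shape of the nonlinearity itself changes during training.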
A survey on modern trainable activation functions / Apicella, A.; Donnarumma, F.; Isgro', F.; Prevete, R. In: Neural Networks, ISSN 0893-6080, 138 (2021), pp. 14-32. [10.1016/j.neunet.2021.01.026]
A survey on modern trainable activation functions
Apicella A. (Member of the Collaboration Group); Isgro' F. (Member of the Collaboration Group); Prevete R. (Member of the Collaboration Group)
2021
Full text: 1-s2.0-S0893608021000344-main.pdf (main article, publisher's version, Adobe PDF, 1.85 MB) — private/restricted access, authorized users only.
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.