- Title:
- Convergence Analysis of Multilayer Feedforward Networks Trained with Penalty Terms: A review
- Authors:
-
Wang, J.
Yang, G.
Liu, S.
Zurada, J. M.
- Relations:
- https://bibliotekanauki.pl/articles/108639.pdf
- Publication date:
- 2015
- Publisher:
- Społeczna Akademia Nauk w Łodzi
- Topics:
-
Gradient
feedforward neural networks
generalization
penalty
convergence
pruning algorithms
- Description:
- The gradient descent method is one of the most popular methods for training feedforward neural networks. Batch and incremental modes are the two most common ways of implementing gradient-based training for such networks in practice. Furthermore, since generalization is an important property and quality criterion of a trained network, pruning algorithms that add regularization terms have been widely used as an efficient way to achieve good generalization. In this paper, we review the convergence properties and other performance aspects of recently studied training approaches based on different penalty terms. In addition, we describe smoothing approximation techniques used when the penalty term is non-differentiable at the origin. (An illustrative sketch of such a penalized, smoothed gradient update is given after this record.)
- Source:
-
Journal of Applied Computer Science Methods; 2015, 7 No. 2; 89-103
1689-9636
- Appears in:
- Journal of Applied Computer Science Methods
- Content provider:
- Biblioteka Nauki
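
Below is a minimal illustrative sketch of the kind of training scheme the abstract refers to: batch gradient descent on a simple sigmoid network with a penalty term that is non-differentiable at the origin, handled via a smoothing approximation. The function names, the piecewise-polynomial smoothing of |x|, the L1/2-style penalty, and all hyperparameters are assumptions chosen for illustration only; they are not details taken from the reviewed papers.

```python
import numpy as np

def smoothed_abs(w, a=0.05):
    """Smooth approximation of |w| (assumed piecewise-polynomial form).

    Equals |w| for |w| >= a; for |w| < a it is replaced by a quadratic,
    so the penalty has a well-defined gradient at w = 0.
    """
    return np.where(np.abs(w) >= a, np.abs(w), w**2 / (2 * a) + a / 2)

def smoothed_abs_grad(w, a=0.05):
    """Derivative of smoothed_abs: sign(w) outside [-a, a], linear inside."""
    return np.where(np.abs(w) >= a, np.sign(w), w / a)

def train_step(W, X, T, lam=1e-4, lr=0.1, a=0.05):
    """One batch gradient-descent step on a single-layer sigmoid network
    with a smoothed L1/2-style penalty (illustrative sketch only)."""
    Y = 1.0 / (1.0 + np.exp(-X @ W))            # forward pass
    err_grad = X.T @ ((Y - T) * Y * (1 - Y))    # gradient of 0.5*sum((Y-T)^2)
    # penalty lam * sum_i f(w_i)^{1/2}, differentiated via the chain rule;
    # f(w_i) >= a/2 > 0, so the power is well defined at w_i = 0
    f = smoothed_abs(W, a)
    pen_grad = 0.5 * f**(-0.5) * smoothed_abs_grad(W, a)
    return W - lr * (err_grad + lam * pen_grad)
```

In a pruning context, weights whose magnitude stays small after such penalized training would be removed; the smoothing keeps the update well defined when weights pass through zero, which is the situation the review's smoothing-approximation discussion addresses.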