- Title:
- Theory II: Deep learning and optimization
- Authors:
-
Poggio, T.
Liao, Q.
- Links:
- https://bibliotekanauki.pl/articles/201787.pdf
- Publication date:
- 2018
- Publisher:
- Polska Akademia Nauk. Czytelnia Czasopism PAN
- Topics:
-
deep learning
convolutional neural networks
loss surface
optimization
neural network
- Description:
- The landscape of the empirical risk of overparametrized deep convolutional neural networks (DCNNs) is characterized with a mix of theory and experiments. In part A we show the existence of a large number of global minimizers with zero empirical error (modulo inconsistent equations). The argument, which relies on Bézout's theorem, is rigorous when the ReLUs are replaced by a polynomial nonlinearity. We show with simulations that the corresponding polynomial network is indistinguishable from the ReLU network. According to Bézout's theorem, the global minimizers are degenerate, unlike the local minima, which in general should be non-degenerate. Further, we experimentally analyze and visualize the landscape of the empirical risk of DCNNs on the CIFAR-10 dataset. Based on the above theoretical and experimental observations, we propose a simple model of the landscape of the empirical risk. In part B, we characterize the optimization properties of stochastic gradient descent (SGD) applied to deep networks. The main claim consists of theoretical and experimental evidence for the following property of SGD: SGD concentrates in probability, like the classical Langevin equation, on large-volume, "flat" minima, selecting with high probability degenerate minimizers which are typically global minimizers.
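- The following is a minimal illustrative sketch, not taken from the paper: it fits a low-degree polynomial surrogate to ReLU and compares the outputs of a small one-hidden-layer network under both activations, in the spirit of the abstract's claim that a polynomial network behaves like the ReLU network. The polynomial degree, layer sizes, and input scale are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

relu = lambda x: np.maximum(x, 0.0)

# Least-squares polynomial surrogate for ReLU on [-1, 1] (degree 4 is an assumption).
xs = np.linspace(-1.0, 1.0, 2001)
coeffs = np.polyfit(xs, relu(xs), deg=4)
poly = lambda x: np.polyval(coeffs, x)

# Random one-hidden-layer network with hypothetical sizes (10 inputs, 100 hidden units).
W1 = rng.standard_normal((100, 10)) / np.sqrt(10)
w2 = rng.standard_normal(100) / np.sqrt(100)

def net(x, act):
    # Forward pass with a pluggable elementwise activation.
    return act(x @ W1.T) @ w2

# Inputs scaled so pre-activations stay roughly inside the fitting interval [-1, 1].
x = rng.standard_normal((5, 10)) * 0.3
print(net(x, relu))   # ReLU network outputs
print(net(x, poly))   # polynomial network outputs; values should be close
```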
- Source:
-
Bulletin of the Polish Academy of Sciences. Technical Sciences; 2018, 66, 6; 775-787
0239-7528
- Appears in:
- Bulletin of the Polish Academy of Sciences. Technical Sciences
- Content provider:
- Biblioteka Nauki