- Title:
- An optimized parallel implementation of non-iteratively trained recurrent neural networks
- Authors:
-
El Zini, Julia
Rizk, Yara
Awad, Mariette
- Links:
- https://bibliotekanauki.pl/articles/2031147.pdf
- Publication date:
- 2021
- Publisher:
- Społeczna Akademia Nauk w Łodzi. Polskie Towarzystwo Sieci Neuronowych
- Subjects:
-
GPU implementation
parallelization
Recurrent Neural Network
RNN
Long Short-Term Memory
LSTM
Gated Recurrent Unit
GRU
Extreme Learning Machines
ELM
non-iterative training
- Description:
- Recurrent neural networks (RNN) have been successfully applied to various sequential decision-making tasks, natural language processing applications, and time-series predictions. Such networks are usually trained through back-propagation through time (BPTT), which becomes prohibitively expensive as the length of the time dependencies and the number of hidden neurons increase. To reduce the training time, extreme learning machines (ELMs) have recently been applied to RNN training, reaching a 99% speedup on some applications. Due to its non-iterative nature, ELM training, when parallelized, has the potential to reach higher speedups than BPTT. In this work, we present Opt-PR-ELM, an optimized parallel RNN training algorithm based on ELM that takes advantage of GPU shared memory and of parallel QR factorization algorithms to reach optimal solutions efficiently. A theoretical analysis of the proposed algorithm is presented for six RNN architectures, including LSTM and GRU, and its performance is empirically tested on ten time-series prediction applications. Opt-PR-ELM is shown to reach up to a 461x speedup over its sequential counterpart and to require up to 20x less training time than parallel BPTT. Such high speedups over new-generation CPUs are crucial in real-time applications and IoT environments.
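The core idea the abstract references, non-iterative ELM-style training of a recurrent network where fixed random hidden weights replace BPTT and the output weights come from a single QR-based least-squares solve, can be illustrated compactly. The snippet below is a minimal NumPy sketch under simplifying assumptions (a single-input vanilla RNN and scalar regression targets), not the paper's Opt-PR-ELM implementation; the names `elm_rnn_train`, `W_in`, `W_rec`, and `beta` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_rnn_train(X, T, n_hidden=64):
    """X: (n_samples, seq_len) scalar time-series windows; T: (n_samples, 1) targets."""
    n_samples, seq_len = X.shape
    # Fixed random weights: they are never updated, which is what makes
    # ELM-style training non-iterative.
    W_in = rng.standard_normal((1, n_hidden))
    W_rec = rng.standard_normal((n_hidden, n_hidden))
    W_rec /= np.abs(np.linalg.eigvals(W_rec)).max()  # scale for a stable recurrence
    h = np.zeros((n_samples, n_hidden))
    for t in range(seq_len):  # unroll the recurrence over each window
        h = np.tanh(X[:, t:t + 1] @ W_in + h @ W_rec)
    # Output weights from one least-squares solve via QR, in place of BPTT:
    # find beta minimizing ||h @ beta - T||.
    Q, R = np.linalg.qr(h)
    beta = np.linalg.solve(R, Q.T @ T)
    return W_in, W_rec, beta
```

In this sketch each training window's hidden states can be computed independently, and the final QR factorization is the only coupling step; per the abstract, Opt-PR-ELM exploits exactly this structure by parallelizing both parts on the GPU.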
- Source:
-
Journal of Artificial Intelligence and Soft Computing Research; 2021, 11, 1; 33-50
2083-2567
2449-6499
- Appears in:
- Journal of Artificial Intelligence and Soft Computing Research
- Content provider:
- Biblioteka Nauki