Deep Neural Networks (DNNs) are neural networks with many hidden layers. They have become popular in automatic speech recognition tasks, where they combine a good acoustic model with a language model. Standard feedforward neural networks do not handle speech data well because they have no mechanism for feeding information from a later layer back to an earlier layer. Recurrent Neural Networks (RNNs) were therefore introduced to take temporal dependencies into account. However, RNNs cannot handle long-term dependencies due to the vanishing/exploding gradient problem. Long Short-Term Memory (LSTM) networks, a special case of RNNs, were therefore introduced; they take long-term dependencies in speech into account in addition to short-term ones. Similarly, Gated Recurrent Unit (GRU) networks are an improvement over LSTM networks that also take long-term dependencies into consideration.
Thus, in this paper, we evaluate RNN, LSTM, and GRU models and compare their performance on a reduced TED-LIUM speech data set. The results show that LSTM achieves the best word error rates; however, GRU optimization is faster while achieving word error rates close to those of LSTM.
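To make the comparison concrete, the following is a minimal sketch (not the paper's implementation) of how the three recurrent layer types can be swapped within an otherwise identical frame-level acoustic model in Keras. The feature dimension, output size, and layer width are illustrative assumptions.

```python
# Illustrative sketch only: contrasting plain RNN, LSTM, and GRU layers.
# Hyperparameters (num_features, num_classes, units) are assumptions, not the paper's setup.
import tensorflow as tf

def build_model(cell="lstm", num_features=40, num_classes=30, units=128):
    """Small acoustic model mapping per-frame features to per-frame label scores."""
    layer = {
        "rnn": tf.keras.layers.SimpleRNN,  # plain recurrence; struggles with long-term context
        "lstm": tf.keras.layers.LSTM,      # gated cell state mitigates vanishing gradients
        "gru": tf.keras.layers.GRU,        # fewer gates/parameters than LSTM, so faster to train
    }[cell]
    return tf.keras.Sequential([
        tf.keras.Input(shape=(None, num_features)),          # variable-length utterances
        layer(units, return_sequences=True),                  # one output per input frame
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_model("gru")
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

In this setup, only the recurrent layer changes between the three variants, which is the kind of controlled comparison the evaluation above refers to.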