
You are searching for the phrase "RNN-LSTM" by the criterion: All fields


Displaying 1-7 of 7
Title:
Predicting banking stock prices using RNN, LSTM, and GRU approach
Authors:
Satria, Dias
Links:
https://bibliotekanauki.pl/articles/30148273.pdf
Publication date:
2023
Publisher:
Polskie Towarzystwo Promocji Wiedzy
Topics:
GRU
Indonesia Stock Price Prediction
machine learning
Description:
In recent years, machine learning has started to be applied in other fields, such as economics and especially investment. However, many methods and models are used without knowing which is the most suitable for predicting particular data. This study aims to find the most suitable model for predicting stock prices, using statistical learning with ARIMA Box-Jenkins as well as the RNN, LSTM, and GRU deep learning methods, on stock price data for four major banks in Indonesia (BRI, BNI, BCA, and Mandiri) from 2013 to 2022. The results showed that ARIMA Box-Jenkins modeling is unsuitable for predicting the BRI, BNI, BCA, and Bank Mandiri stock prices, while GRU presented the best performance for all four banks. A limitation of this research is that only time series data were used, which restricted the study to the four statistical methods above.
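As a hedged illustration of how such prediction tasks are typically set up (not the paper's own code, and with illustrative names), a next-step forecasting problem is usually framed by slicing the price series into fixed-length input windows, which then feed an RNN/LSTM/GRU model:

```python
import numpy as np

def make_windows(series, lookback):
    """Slice a 1-D series into (inputs, next-value targets)
    for next-step prediction, as used when training recurrent models."""
    X = np.array([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y

# toy stand-in series; a real study would use daily closing prices
prices = np.arange(10.0)
X, y = make_windows(prices, lookback=3)
# X[0] is [0., 1., 2.] and its target y[0] is 3.0
```

The same windowing also produces the lagged inputs an ARIMA-style baseline would see, which keeps the comparison between classical and deep models fair.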
Source:
Applied Computer Science; 2023, 19, 1; 82-94
1895-3735
2353-6977
Appears in:
Applied Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
A Deep-Learning-Based Bug Priority Prediction Using RNN-LSTM Neural Networks
Authors:
Bani-Salameh, Hani
Sallam, Mohammed
Al shboul, Bashar
Links:
https://bibliotekanauki.pl/articles/1818480.pdf
Publication date:
2021
Publisher:
Politechnika Wrocławska. Oficyna Wydawnicza Politechniki Wrocławskiej
Topics:
assigning
priority
bug tracking systems
bug priority
bug severity
closed-source
data mining
machine learning
ML
deep learning
RNN-LSTM
SVM
KNN
Description:
Context: Predicting the priority of bug reports is an important activity in software maintenance. Bug priority refers to the order in which a bug or defect should be resolved. A huge number of bug reports are submitted every day, and manually filtering them and assigning a priority to each report is a heavy process that requires time, resources, and expertise. In many cases mistakes happen when priority is assigned manually, which prevents developers from finishing their tasks, fixing bugs, and improving quality. Objective: Bugs are widespread, and the number of bug reports submitted by users and team members keeps growing while resources remain limited. There is therefore a need for a model that detects the priority of bug reports and allows developers to find the highest-priority ones. This paper presents a model that predicts and assigns a priority level (high or low) to each bug report. Method: The model considers a set of factors (indicators), such as component name, summary, assignee, and reporter, that possibly affect the priority level of a bug report. The factors are extracted as features from a dataset built from bug reports taken from closed-source projects stored in the JIRA bug tracking system, which are then used to train and test the framework. This work also presents a tool that helps developers assign a priority level to a bug report automatically, based on the LSTM model's prediction. Results: Our experiments consisted of applying a 5-layer deep learning RNN-LSTM neural network and comparing the results with Support Vector Machine (SVM) and K-nearest neighbors (KNN) for predicting the priority of bug reports. The performance of the proposed RNN-LSTM model was analyzed on a JIRA dataset with more than 2000 bug reports. The proposed model was found to be 90% accurate, compared with KNN (74%) and SVM (87%). On average, RNN-LSTM improves the F-measure by 3% compared to SVM and 15.2% compared to KNN. Conclusion: The study concluded that LSTM predicts and assigns bug priority more accurately and effectively than the other ML algorithms (KNN and SVM), and significantly improves the average F-measure in comparison to the other classifiers. LSTM reported the best results on all performance measures (Accuracy = 0.908, AUC = 0.95, F-measure = 0.892).
Source:
e-Informatica Software Engineering Journal; 2021, 15, 1; 29-45
1897-7979
Appears in:
e-Informatica Software Engineering Journal
Content provider:
Biblioteka Nauki
Article
Title:
Performance evaluation of deep neural networks applied to speech recognition: RNN, LSTM and GRU
Authors:
Shewalkar, Apeksha
Links:
https://bibliotekanauki.pl/articles/91735.pdf
Publication date:
2019
Publisher:
Społeczna Akademia Nauk w Łodzi. Polskie Towarzystwo Sieci Neuronowych
Topics:
spectrogram
connectionist temporal classification
TED-LIUM data set
Description:
Deep Neural Networks (DNNs) are neural networks with many hidden layers. DNNs are becoming popular in automatic speech recognition tasks, which combine a good acoustic model with a language model. Standard feedforward neural networks cannot handle speech data well, since they have no way to feed information from a later layer back to an earlier layer. Recurrent Neural Networks (RNNs) were therefore introduced to take temporal dependencies into account. However, the shortcoming of RNNs is that they cannot handle long-term dependencies, due to the vanishing/exploding gradient problem. Long Short-Term Memory (LSTM) networks, a special case of RNNs, were introduced to take long-term dependencies in speech into account in addition to short-term ones. Similarly, GRU (Gated Recurrent Unit) networks are an improvement over LSTM networks that also take long-term dependencies into consideration. In this paper, we evaluate RNN, LSTM, and GRU, comparing their performance on a reduced TED-LIUM speech data set. The results show that LSTM achieves the best word error rates; however, GRU optimization is faster while achieving word error rates close to LSTM.
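The gating idea described above can be sketched in a few lines of NumPy (an illustrative sketch with made-up parameter names, not the paper's implementation): a GRU cell uses an update gate to interpolate between the previous hidden state and a candidate state, which is what lets it carry long-term information past the vanishing-gradient problem of a plain RNN.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU time step (parameter names are illustrative)."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate: how much new state
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate: how much old state to use
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))   # candidate hidden state
    return (1.0 - z) * h + z * h_cand         # gated interpolation of old and new

rng = np.random.default_rng(0)
d_in, d_h = 3, 4
Wz, Wr, Wh = (rng.standard_normal((d_h, d_in)) for _ in range(3))
Uz, Ur, Uh = (rng.standard_normal((d_h, d_h)) for _ in range(3))

h = np.zeros(d_h)                             # initial hidden state
for _ in range(5):                            # run a short random input sequence
    h = gru_step(rng.standard_normal(d_in), h, Wz, Uz, Wr, Ur, Wh, Uh)
```

An LSTM differs mainly in keeping a separate cell state with input, forget, and output gates; the GRU merges these roles, which is why its optimization tends to be faster, consistent with the result reported above.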
Source:
Journal of Artificial Intelligence and Soft Computing Research; 2019, 9, 4; 235-245
2083-2567
2449-6499
Appears in:
Journal of Artificial Intelligence and Soft Computing Research
Content provider:
Biblioteka Nauki
Article
Title:
An overview of deep learning techniques for short-term electricity load forecasting
Authors:
Adewuyi, Saheed
Aina, Segun
Uzunuigbe, Moses
Lawal, Aderonke
Oluwaranti, Adeniran
Links:
https://bibliotekanauki.pl/articles/117932.pdf
Publication date:
2019
Publisher:
Polskie Towarzystwo Promocji Wiedzy
Topics:
Short-term Load Forecasting
Deep Learning Architectures
RNN
LSTM
CNN
SAE
Description:
This paper presents an overview of some Deep Learning (DL) techniques applicable to forecasting electricity consumption, especially over the short-term horizon. The paper introduces the key parts of four DL architectures (RNN, LSTM, CNN and SAE) that have recently been adopted for Short-term (electricity) Load Forecasting problems, and further presents a model approach for solving such problems. The eventual aim of the study is to give potential researchers working on similar problems an insightful direction on DL concepts for forecasting electricity loads in the short term.
Source:
Applied Computer Science; 2019, 15, 4; 75-92
1895-3735
Appears in:
Applied Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
Forecasting future values of time series using the LSTM network on the example of currencies and WIG20 companies
Authors:
Mróz, Bartosz
Nowicki, Filip
Links:
https://bibliotekanauki.pl/articles/2016294.pdf
Publication date:
2020
Publisher:
Politechnika Bydgoska im. Jana i Jędrzeja Śniadeckich. Wydawnictwo PB
Topics:
recurrent neural network
RNN
gated recurrent unit
GRU
long short-term memory
LSTM
Description:
The article presents a comparison of the RNN, GRU and LSTM networks in predicting future values of time series on the example of currencies and listed companies. The stages of creating an application implementing the analyzed issue are also shown: the selection of networks and technologies and the choice of optimal network parameters. Additionally, two experiments are discussed. The first was to predict the next values of WIG20 companies, exchange rates and cryptocurrencies. The second was based on investing in cryptocurrencies guided solely by the predictions of artificial intelligence, to check whether investments guided by the predictions of such a program have a chance of earning effectively. The discussion of the results includes an analysis of various interesting phenomena that occurred during the experiment and a comprehensive presentation of the relatively high efficiency of the proposed solution, along with graphs and comparisons with real data. Difficulties that occurred during the experiments, such as the coronavirus pandemic and socio-economic events such as the riots in the USA, were also analyzed. Finally, elements that should be improved or included in future versions of the solution are proposed: taking world events and market anomalies into account, and using supervised learning.
Source:
Zeszyty Naukowe. Telekomunikacja i Elektronika / Uniwersytet Technologiczno-Przyrodniczy w Bydgoszczy; 2020, 24; 13-30
1899-0088
Appears in:
Zeszyty Naukowe. Telekomunikacja i Elektronika / Uniwersytet Technologiczno-Przyrodniczy w Bydgoszczy
Content provider:
Biblioteka Nauki
Article
Title:
An optimized parallel implementation of non-iteratively trained recurrent neural networks
Authors:
El Zini, Julia
Rizk, Yara
Awad, Mariette
Links:
https://bibliotekanauki.pl/articles/2031147.pdf
Publication date:
2021
Publisher:
Społeczna Akademia Nauk w Łodzi. Polskie Towarzystwo Sieci Neuronowych
Topics:
GPU implementation
parallelization
Recurrent Neural Network
RNN
Long Short-Term Memory
LSTM
Gated Recurrent Unit
GRU
Extreme Learning Machines
ELM
non-iterative training
Description:
Recurrent neural networks (RNNs) have been successfully applied to various sequential decision-making tasks, natural language processing applications, and time-series predictions. Such networks are usually trained through back-propagation through time (BPTT), which is prohibitively expensive, especially when the length of the time dependencies and the number of hidden neurons increase. To reduce the training time, extreme learning machines (ELMs) have recently been applied to RNN training, reaching a 99% speedup on some applications. Due to its non-iterative nature, ELM training, when parallelized, has the potential to reach higher speedups than BPTT. In this work, we present Opt-PR-ELM, an optimized parallel RNN training algorithm based on ELM that takes advantage of GPU shared memory and of parallel QR factorization algorithms to efficiently reach optimal solutions. The theoretical analysis of the proposed algorithm is presented for six RNN architectures, including LSTM and GRU, and its performance is empirically tested on ten time-series prediction applications. Opt-PR-ELM is shown to reach up to a 461-times speedup over its sequential counterpart and to require up to 20x less time to train than parallel BPTT. Such high speedups over new-generation CPUs are crucial in real-time applications and IoT environments.
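The non-iterative training idea can be sketched on a toy regression (an illustrative sketch under simplifying assumptions, not Opt-PR-ELM itself): in an ELM the hidden weights stay random and untrained, and only the output weights are obtained in a single closed-form least-squares solve, so no back-propagation iterations are needed.

```python
import numpy as np

rng = np.random.default_rng(42)

# toy regression data standing in for a time-series feature matrix
X = rng.standard_normal((200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

# ELM: a random, never-trained hidden layer...
n_hidden = 50
W = rng.standard_normal((3, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                 # hidden-layer activations

# ...then output weights from one closed-form least-squares solve
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
mse = np.mean((H @ beta - y) ** 2)     # training fit, with no iterative training
```

The least-squares step is the part the paper parallelizes via QR factorization on the GPU, which is why the whole procedure scales so much better than BPTT's repeated gradient passes.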
Source:
Journal of Artificial Intelligence and Soft Computing Research; 2021, 11, 1; 33-50
2083-2567
2449-6499
Appears in:
Journal of Artificial Intelligence and Soft Computing Research
Content provider:
Biblioteka Nauki
Article
Title:
Influence of modelling phase transformations with the use of LSTM network on the accuracy of computations of residual stresses for the hardening process
Authors:
Wróbel, Joanna
Kulawik, Adam
Links:
https://bibliotekanauki.pl/articles/27311451.pdf
Publication date:
2023
Publisher:
Polska Akademia Nauk. Czasopisma i Monografie PAN
Topics:
hardening process
temperature
phase transformations in the solid state
effective stresses
numerical modelling
RNN
recurrent neural network
Description:
Replacing mathematical models with artificial intelligence tools can play an important role in numerical models. This paper analyses the modeling of the hardening process in terms of temperature, phase transformations in the solid state, and stresses in the elastic-plastic range. Currently, the use of artificial intelligence tools is increasing, both to make greater generalizations and to reduce possible errors in the numerical simulation process. It is possible to replace the mathematical model of phase transformations in the solid state with an artificial neural network (ANN). Such a substitution requires a network that converts time series (temperature curves) into shares of phase transformations with a small training error; if the network is insufficiently trained, significant differences in stress values will occur due to the existing couplings. Long Short-Term Memory (LSTM) networks were chosen for the analysis. The paper compares the stress levels from two coupled models: a macroscopic model based on CCT diagram analysis using the Johnson-Mehl-Avrami-Kolmogorov (JMAK) and Koistinen-Marburger (KM) equations, and the model learned by the LSTM network. In addition, two levels of network training accuracy were compared. Considering the results obtained from the LSTM-based model, it can be concluded that the classical model can be effectively replaced in modeling the phenomena of the heat treatment process.
Source:
Bulletin of the Polish Academy of Sciences. Technical Sciences; 2023, 71, 4; art. no. e145681
0239-7528
Appears in:
Bulletin of the Polish Academy of Sciences. Technical Sciences
Content provider:
Biblioteka Nauki
Article