
You are searching for the phrase "learning network" by criterion: Subject


Title:
2D Cadastral Coordinate Transformation using extreme learning machine technique
Authors:
Ziggah, Y. Y.
Issaka, Y.
Laari, P. B.
Hui, Z.
Links:
https://bibliotekanauki.pl/articles/145372.pdf
Publication date:
2018
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
coordinate transformation
neural networks
geodetic data
extreme learning machine
backpropagation neural network
radial basis function neural network
geodetic datum
Description:
Land surveyors, photogrammetrists, remote sensing engineers and professionals in the Earth sciences are often faced with the task of transferring coordinates from one geodetic datum into another to serve their desired purpose. The essence is to create compatibility between data related to different geodetic reference frames for geospatial applications. Strictly speaking, conventional techniques of conformal, affine and projective transformation models are mostly used to accomplish such a task. In developing countries like Ghana, where there are no immediate plans to establish a geocentric datum and the astro-geodetic datums still serve as the national mapping reference surface, there is an urgent need to explore the suitability of other transformation methods. In this study, an effort has been made to explore the proficiency of the Extreme Learning Machine (ELM) as a novel alternative coordinate transformation method. The proposed ELM approach was applied to data from the Ghana geodetic reference network. The ELM transformation results were analysed and compared with the benchmark methods of backpropagation neural network (BPNN), radial basis function neural network (RBFNN), two-dimensional (2D) affine and 2D conformal transformation. The overall study results indicate that the ELM can produce transformation results comparable to the widely used BPNN and RBFNN, and better than the 2D affine and 2D conformal models. The results produced by the ELM demonstrate it to be a promising tool for coordinate transformation in Ghana.
Source:
Geodesy and Cartography; 2018, 67, 2; 321-343
2080-6736
2300-2581
Appears in:
Geodesy and Cartography
Content provider:
Biblioteka Nauki
Article
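As a hedged illustration of the ELM technique named in the record above: a minimal extreme learning machine regressor with a fixed random hidden layer and pseudoinverse-trained output weights, applied to a made-up planar transformation. The rotation/scale matrix, translation vector, point counts and input scaling are illustrative assumptions, not values from the paper.

```python
import numpy as np

def elm_fit(X, Y, n_hidden=100, seed=0):
    """Basic extreme learning machine: random fixed hidden layer,
    output weights solved by least squares (Moore-Penrose pseudoinverse)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random biases (never trained)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                 # output weights via pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Illustrative use: learn a hypothetical planar mapping between two 2D frames.
rng = np.random.default_rng(1)
src = rng.uniform(0.0, 1000.0, size=(200, 2))            # source-frame coordinates
A = np.array([[0.9999, 0.0002], [-0.0002, 0.9999]])      # made-up rotation/scale
dst = src @ A + np.array([120.0, -45.0])                 # target-frame coordinates
W, b, beta = elm_fit(src / 1000.0, dst)                  # scale inputs into tanh range
pred = elm_predict(src / 1000.0, W, b, beta)
rmse = np.sqrt(np.mean((pred - dst) ** 2))
```

In the study itself, `src`/`dst` would be common points surveyed in both the astro-geodetic datum and the target reference frame.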
Title:
A cloud-based urban monitoring system by using a quadcopter and intelligent learning techniques
Authors:
Khanmohammadi, Sohrab
Samadi, Mohammad
Links:
https://bibliotekanauki.pl/articles/27314186.pdf
Publication date:
2022
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Automatyki i Pomiarów
Subjects:
urban monitoring
cloud computing
quadcopter
deep learning
fuzzy system
image processing
pattern recognition
Bayesian network
intelligent techniques
learning systems
Description:
The application of a quadcopter and intelligent learning techniques in urban monitoring systems can improve flexibility and efficiency. This paper proposes a cloud-based urban monitoring system that uses deep learning, a fuzzy system, image processing, pattern recognition, and a Bayesian network. The main objectives of this system are to monitor climate status, temperature, humidity, and smoke, as well as to detect fire occurrences, based on the above intelligent techniques. The quadcopter transmits the sensing data of the temperature, humidity, and smoke sensors, geographical coordinates, image frames, and videos to a control station via RF communications. On the control station side, the monitoring capabilities are designed with graphical tools to show urban areas in RGB colors according to predetermined data ranges. The evaluation process illustrates simulation results of the deep neural network applied to climate status, and the effects of changes in the sensors' data on climate status. An illustrative example is used to draw the simulated area using RGB colors. Furthermore, the circuit of the quadcopter side is designed using electric devices.
Source:
Journal of Automation Mobile Robotics and Intelligent Systems; 2022, 16, 2; 11-19
1897-8649
2080-2145
Appears in:
Journal of Automation Mobile Robotics and Intelligent Systems
Content provider:
Biblioteka Nauki
Article
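The abstract above mentions colouring urban areas with RGB colours according to predetermined data ranges. A minimal sketch of that mapping step for one sensor channel; the thresholds and colours below are hypothetical assumptions, not values from the paper.

```python
# Hypothetical thresholds: map a temperature reading to an RGB colour for
# display on the control-station map (green = normal, yellow = elevated,
# red = possible fire risk). The ranges are illustrative only.
def temperature_to_rgb(temp_c):
    if temp_c < 30.0:
        return (0, 255, 0)      # green: normal range
    if temp_c < 45.0:
        return (255, 255, 0)    # yellow: elevated range
    return (255, 0, 0)          # red: fire-risk range

print(temperature_to_rgb(22.0))  # (0, 255, 0)
```

The same range-to-colour idea would apply to the humidity and smoke channels.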
Title:
A comparative study on performance of basic and ensemble classifiers with various datasets
Authors:
Gunakala, Archana
Shahid, Afzal Hussain
Links:
https://bibliotekanauki.pl/articles/30148255.pdf
Publication date:
2023
Publisher:
Polskie Towarzystwo Promocji Wiedzy
Subjects:
classification
Naïve Bayes
neural network
Support Vector Machine
Decision Tree
ensemble learning
Random Forest
Description:
Classification plays a critical role in machine learning (ML) systems for processing images, text and high-dimensional data. Predicting class labels from training data is the primary goal of classification. An optimal model for a particular classification problem is chosen based on the model's performance and execution time. This paper compares and analyzes the performance of basic as well as ensemble classifiers utilizing 10-fold cross-validation, and also discusses their essential concepts, advantages, and disadvantages. In this study, five basic classifiers, namely Naïve Bayes (NB), Multi-layer Perceptron (MLP), Support Vector Machine (SVM), Decision Tree (DT), and Random Forest (RF), together with the ensemble of all five classifiers and a few more combinations, are compared on five University of California Irvine (UCI) ML Repository datasets and a Diabetes Health Indicators dataset from the Kaggle repository. To analyze and compare the performance of the classifiers, evaluation metrics like Accuracy, Recall, Precision, Area Under Curve (AUC) and F-Score are used. Experimental results showed that SVM performs best on two of the six datasets (Diabetes Health Indicators and Waveform), RF performs best on the Arrhythmia, Sonar and Tic-tac-toe datasets, and the best ensemble combination is found to be DT+SVM+RF on the Ionosphere dataset, with respective accuracies of 72.58%, 90.38%, 81.63%, 73.59%, 94.78% and 94.01%. The proposed ensemble combinations outperformed the conventional models on a few datasets.
Source:
Applied Computer Science; 2023, 19, 1; 107-132
1895-3735
2353-6977
Appears in:
Applied Computer Science
Content provider:
Biblioteka Nauki
Article
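The ensemble combinations discussed above amount to hard majority voting over base classifiers. A minimal sketch of the voting step itself; the base-classifier predictions below are hypothetical stand-ins for trained NB/MLP/SVM/DT/RF models.

```python
import numpy as np

def majority_vote(predictions):
    """Hard-voting ensemble: each row of `predictions` holds one base
    classifier's predicted labels; return the per-sample majority label."""
    predictions = np.asarray(predictions)
    n_samples = predictions.shape[1]
    out = np.empty(n_samples, dtype=predictions.dtype)
    for i in range(n_samples):
        labels, counts = np.unique(predictions[:, i], return_counts=True)
        out[i] = labels[np.argmax(counts)]   # most frequent label wins
    return out

# Illustrative: three hypothetical base classifiers (e.g. DT, SVM, RF)
# disagree on some samples; the ensemble takes the majority label.
preds = [
    [0, 1, 1, 0],   # classifier 1
    [0, 1, 0, 0],   # classifier 2
    [1, 1, 1, 0],   # classifier 3
]
print(majority_vote(preds))  # [0 1 1 0]
```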
Title:
A distributed big data analytics model for traffic accidents classification and recognition based on SparkMlLib cores
Authors:
Mallahi, Imad El
Riffi, Jamal
Tairi, Hamid
Ez-Zahout, Abderrahmane
Mahraz, Mohamed Adnane
Links:
https://bibliotekanauki.pl/articles/27314355.pdf
Publication date:
2022
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Automatyki i Pomiarów
Subjects:
big data
machine learning
traffic accident
severity prediction
convolutional neural network
Description:
This paper focuses on big data analytics for traffic accident prediction based on SparkMllib cores; Spark's Machine Learning Pipelines provide a helpful and suitable API for creating and tuning classification and prediction models for decision-making concerning traffic accidents. Data scientists have recently focused on classification and prediction techniques for traffic accidents, and data analytics techniques for feature extraction have also continued to evolve. Analysis of a huge volume of received data requires considerable processing time, and in practice the implementation of such processes in real-time systems requires a high computation speed. Processing speed plays an important role in traffic accident recognition in real-time systems. It requires the use of modern technologies and fast algorithms that accelerate the extraction of feature parameters from traffic accidents; problems with acceleration during the digital processing of traffic accidents have yet to be completely resolved. Our proposed model is based on advanced processing by the Spark MlLib core. We call the real-time data streaming API on Spark to continuously gather real-time data from multiple external data sources in the form of data streams. Secondly, the data streams are treated as unbounded tables. After this, we call the random forest algorithm continuously to extract the feature parameters of a traffic accident. The proposed method makes it possible to increase the speed factor on processors. Experimental results showed that the proposed method successfully extracts the accident features and achieves seamless classification performance compared to other conventional traffic accident recognition algorithms. Finally, we share all detected accidents, with details, with other users via online applications.
Source:
Journal of Automation Mobile Robotics and Intelligent Systems; 2022, 16, 4; 62-71
1897-8649
2080-2145
Appears in:
Journal of Automation Mobile Robotics and Intelligent Systems
Content provider:
Biblioteka Nauki
Article
Title:
A fast neural network learning algorithm with approximate singular value decomposition
Authors:
Jankowski, Norbert
Linowiecki, Rafał
Links:
https://bibliotekanauki.pl/articles/330870.pdf
Publication date:
2019
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
Moore–Penrose pseudoinverse
radial basis function network
extreme learning machine
kernel method
machine learning
singular value decomposition
deep extreme learning
principal component analysis
Description:
The learning of neural networks is becoming more and more important. Researchers have constructed dozens of learning algorithms, but it is still necessary to develop faster, more flexible, or more accurate ones. With fast learning we can examine more learning scenarios for a given problem, especially in the case of meta-learning. In this article we focus on the construction of a much faster learning algorithm and its modifications, especially for nonlinear versions of neural networks. The main idea of this algorithm lies in the use of a fast approximation of the Moore–Penrose pseudoinverse matrix. The complexity of the original singular value decomposition algorithm is O(mn²). We consider algorithms with a complexity of O(mnl), where l < n and l is often significantly smaller than n. Such learning algorithms can be applied to the learning of radial basis function networks, extreme learning machines or deep ELMs, principal component analysis, or even missing data imputation.
Source:
International Journal of Applied Mathematics and Computer Science; 2019, 29, 3; 581-594
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
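The abstract above describes replacing the O(mn²) SVD with an O(mnl) approximation when computing the Moore–Penrose pseudoinverse. One generic way to get such a rank-l approximation is a randomized SVD; the sketch below uses that standard construction and is not necessarily the authors' exact algorithm.

```python
import numpy as np

def approx_pinv(A, l, seed=0):
    """Approximate Moore-Penrose pseudoinverse via a rank-l randomized SVD
    (cost roughly O(m*n*l) instead of O(m*n^2))."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.normal(size=(n, l))        # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)         # m x l orthonormal basis for the range
    B = Q.T @ A                            # small l x n matrix
    U_b, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_b                            # lift back to the full space
    s_inv = np.where(s > s[0] * 1e-8, 1.0 / s, 0.0)   # invert significant modes only
    return Vt.T @ (s_inv[:, None] * U.T)   # n x m approximate pseudoinverse

# Illustrative check on a low-rank matrix, where a rank-l sketch is exact.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 50))   # rank-8 matrix
P = approx_pinv(A, l=10)
err = np.linalg.norm(A @ P @ A - A) / np.linalg.norm(A)    # pseudoinverse residual
```

In an ELM or RBF network, `A` would be the hidden-layer activation matrix and `P @ Y` the output weights.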
Title:
A few-shot fine-grained image recognition method
Authors:
Wang, Jianwei
Chen, Deyun
Links:
https://bibliotekanauki.pl/articles/2204540.pdf
Publication date:
2023
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
few-shot learning
attention metric
CNN
convolutional neural network
feature expression
Description:
Deep learning methods benefit from data sets with comprehensive coverage (e.g., ImageNet, COCO, etc.), which can be regarded as a description of the distribution of real-world data. The models trained on these datasets are considered able to extract general features and migrate to domains not seen downstream. However, in open scenes, the labeled data of the target data set are often insufficient, and deep models trained on a small amount of sample data have poor generalization ability. The identification of new categories, or of categories with very few samples, is still a challenging task. This paper proposes a few-shot fine-grained image recognition method. Feature maps are extracted by a CNN module with an embedded attention network to emphasize the discriminative features. A channel-based feature expression is applied to the base class and the novel class, followed by an improved cosine similarity-based measurement method that yields a similarity score used for classification. Experiments are performed on the main few-shot benchmark datasets, such as Stanford Dogs and CUB-200, to verify the efficiency and generality of our model. The experimental results show that our method achieves more advanced performance on fine-grained datasets.
Source:
Bulletin of the Polish Academy of Sciences. Technical Sciences; 2023, 71, 1; art. no. e144584
0239-7528
Appears in:
Bulletin of the Polish Academy of Sciences. Technical Sciences
Content provider:
Biblioteka Nauki
Article
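The measurement step described above, scoring query features against class representations, can be sketched with plain cosine similarity (the paper uses an improved cosine similarity-based measure; the prototypes and feature vectors below are made up for illustration).

```python
import numpy as np

def cosine_classify(query, prototypes):
    """Assign each query feature vector to the class whose prototype has
    the highest cosine similarity."""
    q = query / np.linalg.norm(query, axis=1, keepdims=True)          # unit vectors
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    scores = q @ p.T                      # cosine similarity, one column per class
    return np.argmax(scores, axis=1)      # best-matching class index

# Illustrative: two class prototypes (e.g. mean support-set embeddings)
# and two query embeddings.
protos = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 1.0]])
queries = np.array([[0.9, 0.1, 0.0],
                    [0.1, 0.8, 0.7]])
print(cosine_classify(queries, protos))  # [0 1]
```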
Title:
A genetic algorithm based optimized convolutional neural network for face recognition
Authors:
Karlupia, Namrata
Mahajan, Palak
Abrol, Pawanesh
Lehana, Parveen K.
Links:
https://bibliotekanauki.pl/articles/2201023.pdf
Publication date:
2023
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
convolutional neural network
genetic algorithm
deep learning
evolutionary technique
Description:
Face recognition (FR) is one of the most active research areas in the field of computer vision. Convolutional neural networks (CNNs) have been extensively used in this field due to their good efficiency. Thus, it is important to find the best CNN parameters for the best performance. Hyperparameter optimization is one of the various techniques for increasing the performance of CNN models. Since manual tuning of hyperparameters is a tedious and time-consuming task, population-based metaheuristic techniques can be used for the automatic hyperparameter optimization of CNNs. Automatic tuning of parameters reduces manual effort and improves the efficiency of the CNN model. In the proposed work, genetic algorithm (GA) based hyperparameter optimization of CNNs is applied to face recognition. GAs are used for the optimization of various hyperparameters, such as the filter size, the number of filters, and the number of hidden layers. For analysis, a benchmark dataset for FR with ninety subjects is used. The experimental results indicate that the proposed GA-CNN model achieves improved accuracy in comparison with existing CNN models. In each iteration, the GA minimizes the objective function by selecting the best combination set of CNN hyperparameters. An improved accuracy of 94.5% is obtained for FR.
Source:
International Journal of Applied Mathematics and Computer Science; 2023, 33, 1; 21-31
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
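A toy sketch of GA-based hyperparameter search in the spirit of the abstract above. The search space and the surrogate fitness function are invented stand-ins; in the paper, fitness would be the validation accuracy of a CNN trained with each candidate's hyperparameters.

```python
import random

# Hypothetical discrete search space over CNN-style hyperparameters.
SPACE = {"filter_size": [3, 5, 7], "n_filters": [16, 32, 64], "n_layers": [1, 2, 3]}

def fitness(ind):
    # Surrogate for validation accuracy: pretend one configuration is optimal.
    target = {"filter_size": 5, "n_filters": 32, "n_layers": 2}
    return sum(ind[k] == target[k] for k in SPACE)

def mutate(ind):
    k = random.choice(list(SPACE))                 # re-sample one random gene
    return {**ind, k: random.choice(SPACE[k])}

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SPACE}   # uniform crossover

def ga(pop_size=10, generations=30, seed=0):
    random.seed(seed)
    pop = [{k: random.choice(v) for k, v in SPACE.items()} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]             # truncation selection (elitist)
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = ga()
```

Swapping `fitness` for "train this CNN, return validation accuracy" recovers the kind of loop the paper describes.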
Title:
A High-Accuracy of Transmission Line Faults (TLFs) Classification Based on Convolutional Neural Network
Authors:
Fuada, S.
Shiddieqy, H. A.
Adiono, T.
Links:
https://bibliotekanauki.pl/articles/1844462.pdf
Publication date:
2020
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
fault detection
fault classification
transmission lines
convolutional neural network
machine learning
Description:
To improve power system reliability, a protection mechanism is highly needed. Early detection can be used to prevent failures in the power transmission line (TL). A classification method is widely used to protect against false detection as well as to assist decision analysis. Each TL signal has a continuous pattern that can be detected and classified by conventional methods, i.e., wavelet feature extraction and an artificial neural network (ANN). However, the accuracy of these models is relatively low. To overcome this issue, we propose a machine learning approach based on a Convolutional Neural Network (CNN) for the transmission line faults (TLFs) application. A CNN is more suitable for pattern recognition than a conventional ANN or an ANN with Discrete Wavelet Transform (DWT) feature extraction. In this work, we first simulate our proposed model using Simulink® and Matlab®. This simulation generates a fault signal dataset, which is divided into 45,738 training samples and 4,752 test samples. We then design a number of machine learning classifiers, each trained on the same dataset. The CNN design with raw input is determined as the optimal output model of the training process, with 100% accuracy.
Source:
International Journal of Electronics and Telecommunications; 2020, 66, 4; 655-664
2300-1933
Appears in:
International Journal of Electronics and Telecommunications
Content provider:
Biblioteka Nauki
Article
Title:
A hybrid approach of a deep learning technique for real-time ECG beat detection
Authors:
Patro, Kiran Kumar
Prakash, Allam Jaya
Samantray, Saunak
Pławiak, Joanna
Tadeusiewicz, Ryszard
Pławiak, Paweł
Links:
https://bibliotekanauki.pl/articles/2172118.pdf
Publication date:
2022
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
cardiac abnormalities
CAD
convolutional neural network
CNN
deep learning
ECG
electrocardiogram
supra-ventricular ectopic beats
SVE
Description:
This paper presents a new customized hybrid approach for the early detection of cardiac abnormalities using an electrocardiogram (ECG). The ECG is a bio-electrical signal that helps monitor the heart's electrical activity. It can provide health information about the normal and abnormal physiology of the heart. Early diagnosis of cardiac abnormalities is critical for cardiac patients to avoid stroke or sudden cardiac death. The main aim of this paper is to detect crucial beats that can damage the functioning of the heart. Initially, a modified Pan–Tompkins algorithm identifies the characteristic points, followed by heartbeat segmentation. Subsequently, a custom hybrid deep convolutional neural network (CNN) is proposed and evaluated on standard and real-time long-term ECG databases. This work successfully classifies several cardiac beat abnormalities, such as supra-ventricular ectopic beats (SVE), ventricular beats (VE), intra-ventricular conduction disturbance beats (IVCD), and normal beats (N). The obtained classification results show a better accuracy of 99.28% with an F1 score of 99.24% on the MIT–BIH database, and a decent accuracy of 99.12% on the real-time acquired database.
Source:
International Journal of Applied Mathematics and Computer Science; 2022, 32, 3; 455-465
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
A hybrid control strategy for a dynamic scheduling problem in transit networks
Authors:
Liu, Zhongshan
Yu, Bin
Zhang, Li
Wang, Wensi
Links:
https://bibliotekanauki.pl/articles/2172126.pdf
Publication date:
2022
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
service reliability
transit network
proactive control method
deep reinforcement learning
hybrid control strategy
Description:
Public transportation is often disrupted by disturbances, such as the uncertain travel time caused by road congestion. Therefore, operators need to take real-time measures to guarantee the service reliability of transit networks. In this paper, we investigate a dynamic scheduling problem in a transit network which takes into account the impact of disturbances on bus services. The objective is to minimize the total travel time of passengers in the transit network. A two-layer control method is developed to solve the proposed problem based on a hybrid control strategy. Specifically, in addition to conventional strategies (e.g., holding, stop-skipping), the hybrid control strategy makes full use of the idle standby buses at the depot: standby buses can be dispatched to bus fleets to provide temporary or regular services. Besides, deep reinforcement learning (DRL) is adopted to solve the continuous decision-making problem. A long short-term memory (LSTM) module is added to the DRL framework to predict future passenger demand, which enables the current decision to adapt to disturbances. The numerical results indicate that the hybrid control strategy can reduce the average headway of the bus fleet and improve the reliability of the bus service.
Source:
International Journal of Applied Mathematics and Computer Science; 2022, 32, 4; 553-567
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
A hybrid two-stage SqueezeNet and support vector machine system for Parkinson’s disease detection based on handwritten spiral patterns
Authors:
Bernardo, Lucas Salvador
Damaševičius, Robertas
de Albuquerque, Victor Hugo C.
Maskeliūnas, Rytis
Links:
https://bibliotekanauki.pl/articles/2055162.pdf
Publication date:
2021
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
Parkinson’s disease
spirography
convolutional neural network
deep learning
Description:
Parkinson’s disease (PD) is the second most common neurological disorder in the world. It is currently estimated to affect from 2% to 3% of the global population over 65 years old. In clinical environments, a spiral drawing task is performed to help obtain the disease’s diagnosis, as the spiral trajectory differs between people with PD and healthy ones. This paper aims to analyze differences between handmade drawings of PD patients and healthy subjects by applying the SqueezeNet convolutional neural network (CNN) model as a feature extractor and a support vector machine (SVM) as a classifier. The dataset used for training and testing consists of 514 handwritten drawings of Archimedes’ spirals derived from heterogeneous sources (digital and paper-based), of which 296 correspond to PD patients and 218 to healthy subjects. To extract features with the proposed CNN, a model is trained, with 20% of the data held out for testing. Feature extraction yields 512 features, which are used for SVM training and testing, and the performance is compared with that of other machine learning classifiers, such as a Gaussian naive Bayes (GNB) classifier (82.61%) and a random forest (RF) (87.38%). The proposed method achieves an accuracy of 91.26%, which represents an improvement over pure CNN-based models such as SqueezeNet (85.29%), VGG11 (87.25%), and ResNet (89.22%).
Source:
International Journal of Applied Mathematics and Computer Science; 2021, 31, 4; 549-561
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
A new method of cardiac sympathetic index estimation using a 1D-convolutional neural network
Authors:
Kołodziej, Marcin
Majkowski, Andrzej
Tarnowski, Paweł
Rak, Remigiusz Jan
Rysz, Andrzej
Links:
https://bibliotekanauki.pl/articles/2090741.pdf
Publication date:
2021
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
epilepsy
seizure detection
seizure prediction
convolutional neural network
deep learning
ECG
HRV
cardiac sympathetic index
Description:
Epilepsy is a neurological disorder that causes seizures of many different types. The article presents an analysis of heart rate variability (HRV) for epileptic seizure prediction. Considering that HRV is nonstationary, our research focused on the quantitative analysis of a Poincaré plot feature, the cardiac sympathetic index (CSI). It has been reported that the CSI value increases before an epileptic seizure. An algorithm using a 1D-convolutional neural network (1D-CNN) was proposed for CSI estimation. The usability of this method was checked on 40 epilepsy patients. Our algorithm was compared with the method proposed by Toichi et al. The mean squared error (MSE) for the testing data was 0.046 and the mean absolute percentage error (MAPE) amounted to 0.097. The 1D-CNN algorithm was also compared with regression methods; for this purpose, a classical type of neural network (MLP), as well as linear regression and SVM regression, were tested. In the study, typical artifacts occurring in ECG signals before and during an epileptic seizure were simulated. The proposed 1D-CNN algorithm estimates the CSI well and is resistant to noise and artifacts in the ECG signal.
Source:
Bulletin of the Polish Academy of Sciences. Technical Sciences; 2021, 69, 3; art. no. e136921, 1-9
0239-7528
Appears in:
Bulletin of the Polish Academy of Sciences. Technical Sciences
Content provider:
Biblioteka Nauki
Article
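The CSI feature analyzed above comes from the Poincaré plot of successive RR intervals. A minimal sketch of Toichi-style CSI computation (the SD2/SD1 ratio of the plot); the RR series below is synthetic, not patient data.

```python
import numpy as np

def cardiac_sympathetic_index(rr_ms):
    """Cardiac sympathetic index from consecutive RR intervals:
    the ratio SD2/SD1 of the Poincare plot (spread along vs. across
    the identity line)."""
    rr = np.asarray(rr_ms, dtype=float)
    x, y = rr[:-1], rr[1:]                 # successive-interval pairs (RR_n, RR_n+1)
    sd1 = np.std((y - x) / np.sqrt(2))     # short-term variability (across identity line)
    sd2 = np.std((y + x) / np.sqrt(2))     # long-term variability (along identity line)
    return sd2 / sd1

# Synthetic RR series (ms); a real pipeline would obtain RR intervals from
# ECG R-peak detection (e.g. a Pan-Tompkins stage, as in the paper).
rng = np.random.default_rng(0)
rr = 800.0 + np.cumsum(rng.normal(0, 5, size=300)) * 0.1 + rng.normal(0, 10, size=300)
csi = cardiac_sympathetic_index(rr)
```

The 1D-CNN in the paper learns to estimate this quantity directly, which is what makes it robust to ECG noise and artifacts.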
Title:
A novel method for automatic detection of arrhythmias using the unsupervised convolutional neural network
Authors:
Zhang, Junming
Yao, Ruxian
Gao, Jinfeng
Li, Gangqiang
Wu, Haitao
Links:
https://bibliotekanauki.pl/articles/23944827.pdf
Publication date:
2023
Publisher:
Społeczna Akademia Nauk w Łodzi. Polskie Towarzystwo Sieci Neuronowych
Subjects:
convolutional neural network
arrhythmia detection
unsupervised learning
ECG classification
Description:
In recent years, various models based on convolutional neural networks (CNNs) have been proposed to solve the cardiac arrhythmia detection problem and have achieved saturated accuracy. However, these models are often viewed as "black boxes" and lack interpretability, which hinders the understanding of cardiologists and ultimately hinders the clinical use of intelligent terminals. At the same time, most of these approaches use supervised learning and require labeled data, and obtaining labeled data is a time-consuming and expensive process. Furthermore, in the human visual cortex, lateral connections are as important as feed-forward connections, yet CNNs based on lateral connections have not been studied thus far. Consequently, in this paper we combine CNNs, lateral connections and an autoencoder (AE) to propose the building blocks of lateral connection convolutional autoencoder neural networks (LCAN) for cardiac arrhythmia detection, which learn representations in an unsupervised manner. Concretely, the LCAN contains a convolution layer, a lateral connection layer, an AE layer, and a pooling layer. The LCAN detects salient wave features through the lateral connection layer. The AE layer and competitive learning are used to update the filters of the convolution network, an unsupervised process that ensures a similar weight distribution for all adjacent filters in each convolution layer and realizes the neurons' semantic arrangement in the LCAN. To evaluate the performance of the proposed model, we implemented experiments on the well-known MIT–BIH Arrhythmia Database. The proposed model yields a total accuracy and kappa coefficient of 98% and 0.95, respectively. The experimental results show that the LCAN is not only effective, but also a useful tool for arrhythmia detection.
Source:
Journal of Artificial Intelligence and Soft Computing Research; 2023, 13, 3; 181-196
2083-2567
2449-6499
Appears in:
Journal of Artificial Intelligence and Soft Computing Research
Content provider:
Biblioteka Nauki
Article
Title:
A novel reliability estimation method of multi-state system based on structure learning algorithm
Authors:
Li, Zhifeng
Wang, Zili
Ren, Yi
Yang, Dezhen
Lv, Xing
Links:
https://bibliotekanauki.pl/articles/301718.pdf
Publication date:
2020
Publisher:
Polska Akademia Nauk. Polskie Naukowo-Techniczne Towarzystwo Eksploatacyjne PAN
Subjects:
reliability analysis
Bayesian network
structure learning
multi-state system (MSS)
dependent failure
Description:
Traditional reliability models, such as fault tree analysis (FTA) and reliability block diagrams (RBD), are typically constructed with reference to the function principle graph produced by system engineers, which requires substantial time and effort. In addition, the quality and correctness of the models depend on the ability and experience of the engineers, and the models are difficult to verify. With the development of data acquisition, data mining and system modeling techniques, the operational data of a complex system with multi-state, dependent behavior can be obtained and analyzed automatically. In this paper, we present a method based on the K2 algorithm for establishing a Bayesian network (BN) to estimate the reliability of a multi-state system with dependent behavior. Facilitated by BN tools, reliability modeling and reliability estimation can be conducted automatically. An illustrative example is used to demonstrate the performance of the method.
Source:
Eksploatacja i Niezawodność; 2020, 22, 1; 170-178
1507-2711
Appears in:
Eksploatacja i Niezawodność
Content provider:
Biblioteka Nauki
Article
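The K2 algorithm mentioned above scores candidate parent sets and greedily grows each node's parents under a fixed node ordering. A small pure-Python sketch for binary variables; the data and its component semantics are invented for illustration.

```python
import math
from itertools import product

def k2_score(data, child, parents, r=2):
    """Log K2 score of `child` given `parents` for discrete data with
    values 0..r-1; `data` is a list of sample tuples."""
    score = 0.0
    for cfg in product(range(r), repeat=len(parents)):
        rows = [s for s in data if all(s[p] == v for p, v in zip(parents, cfg))]
        counts = [sum(1 for s in rows if s[child] == k) for k in range(r)]
        n_j = sum(counts)
        # log of (r-1)! / (N_j + r - 1)! * prod_k N_jk!   (via lgamma)
        score += (math.lgamma(r) - math.lgamma(n_j + r)
                  + sum(math.lgamma(c + 1) for c in counts))
    return score

def k2_parents(data, child, candidates, max_parents=2, r=2):
    """Greedy K2 parent search for one node, given its allowed ancestors."""
    parents, best = [], k2_score(data, child, [], r)
    while len(parents) < max_parents:
        gains = [(k2_score(data, child, parents + [c], r), c)
                 for c in candidates if c not in parents]
        if not gains:
            break
        top, c = max(gains)
        if top <= best:                      # no candidate improves the score
            break
        parents, best = parents + [c], top
    return parents

# Illustrative: component 0's state fully determines component 1's state
# (a dependent-failure pattern), while component 2 varies independently;
# K2 should recover 0 as the parent of 1 and no parent for 2.
data = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1)] * 25
print(k2_parents(data, child=1, candidates=[0, 2]))  # [0]
```

With the structure learned, BN tools can then propagate component state probabilities to a system-reliability estimate, which is the automatic workflow the abstract describes.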
