
Search results for the phrase "learning network" by criterion: Subject


Title:
Network Traffic Classification in an NFV Environment using Supervised ML Algorithms
Authors:
Ilievski, Gjorgji
Latkoski, Pero
Links:
https://bibliotekanauki.pl/articles/1839335.pdf
Publication date:
2021
Publisher:
Instytut Łączności - Państwowy Instytut Badawczy
Subjects:
classification
machine learning
network functions virtualization
network traffic
Description:
We have conducted research on the performance of six supervised machine learning (ML) algorithms used for network traffic classification in a virtual environment driven by network function virtualization (NFV). The performance analysis focused on the precision of the classification process, but also on the time intensity (speed) of the supervised ML algorithms. We devised a specific traffic taxonomy using commonly encountered categories, with particular emphasis placed on VoIP and encrypted VoIP protocols, which serve as a basis of the 5G architecture. NFV is considered to be one of the foundations of 5G development, as the traditional networking components are fully virtualized, in many cases relying on mixed cloud solutions, both on-premise and public cloud-based. Virtual machines are being replaced by containers and application functions, while most of the network traffic flows in the east-west direction within the cloud. The analysis has shown that in such an environment, the Decision Tree algorithm is best suited, among the six algorithms considered, for performing classification tasks, and offers the speed required to introduce minimal delays in network flows, which is crucial in 5G networks, where packet delay requirements are of great significance. It proved to be reliable and offered excellent overall performance across multiple network packet classes within a virtualized NFV network architecture. While performing the classification procedure, we worked only with statistical network flow features, leaving out packet payload as well as source-, destination- and port-related information, thus making the analysis valid not only from the technical, but also from the regulatory point of view.
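Flow-based classification on statistical features, as described above, can be sketched with a decision tree trained only on per-flow statistics. The feature names and the tiny synthetic dataset below are illustrative assumptions, not the paper's data:

```python
# Sketch: decision-tree traffic classification from statistical flow
# features only (no payload, addresses, or ports). The feature set and
# the tiny synthetic dataset are illustrative assumptions.
from sklearn.tree import DecisionTreeClassifier

# Features per flow: [duration_s, packet_count, mean_pkt_bytes, mean_inter_arrival_s]
flows = [
    [0.02, 10, 160, 0.002],    # VoIP-like: small, frequent packets
    [0.03, 12, 170, 0.003],
    [5.00, 800, 1400, 0.006],  # bulk-transfer-like: large packets
    [4.20, 650, 1350, 0.007],
]
labels = ["voip", "voip", "bulk", "bulk"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(flows, labels)
print(clf.predict([[0.025, 11, 165, 0.002]]))  # expected: ['voip']
```

Any single feature separates these two toy classes, so the tree's chosen split places the query flow on the VoIP side regardless of which feature it picks.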
Source:
Journal of Telecommunications and Information Technology; 2021, 3; 23-31
1509-4553
1899-8852
Appears in:
Journal of Telecommunications and Information Technology
Content provider:
Biblioteka Nauki
Article
Title:
Ensemble Model for Network Intrusion Detection System Based on Bagging Using J48
Authors:
Otoom, Mohammad Mahmood
Sattar, Khalid Nazim Abdul
Al Sadig, Mutasim
Links:
https://bibliotekanauki.pl/articles/2201908.pdf
Publication date:
2023
Publisher:
Stowarzyszenie Inżynierów i Techników Mechaników Polskich
Subjects:
cyber security
network intrusion
ensemble learning
machine learning
ML
Description:
Technology is advancing on a daily basis with the growth of the web, artificial intelligence (AI), and big data generated by machines in various industries. All of these provide a gateway for cybercrime, which makes network security a challenging task, and the development of network intrusion detection (NID) systems faces many challenges. Computer systems are becoming increasingly vulnerable to attack as a result of the rise in cybercrime, the availability of vast amounts of data on the internet, and increased network connectivity; this is because creating a system with no vulnerabilities is not theoretically possible. In previous studies, various approaches have been developed for this issue, each with its strengths and weaknesses; however, there is still a need for minimal variance and improved accuracy. To this end, this study proposes an ensemble model based on Bagging with the J48 Decision Tree. The proposed model outperforms the other employed models in terms of accuracy. The outcomes are assessed via accuracy, recall, precision, and F-measure. The overall average accuracy achieved by the proposed model is 83.73%.
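The Bagging + J48 setup named above can be sketched as a bagged ensemble of decision trees. J48 is Weka's C4.5 implementation; here an entropy-based scikit-learn tree is used as a stand-in, and the dataset is a synthetic illustration, not the study's data:

```python
# Sketch: bagging an ensemble of decision trees, in the spirit of the
# paper's Bagging + J48 model. J48 is Weka's C4.5; an entropy-criterion
# sklearn tree is a stand-in, and the data are synthetic illustrations.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

bag = BaggingClassifier(
    DecisionTreeClassifier(criterion="entropy"),  # C4.5-like split rule
    n_estimators=25,   # 25 bootstrap-trained trees vote on each sample
    random_state=0,
)
bag.fit(X_tr, y_tr)
print(f"holdout accuracy: {bag.score(X_te, y_te):.3f}")
```

Bagging reduces the variance of the individual trees, which is exactly the "minimal variance" motivation stated in the abstract.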
Source:
Advances in Science and Technology. Research Journal; 2023, 17, 2; 322-329
2299-8624
Appears in:
Advances in Science and Technology. Research Journal
Content provider:
Biblioteka Nauki
Article
Title:
Performance Analysis of LEACH with Deep Learning in Wireless Sensor Networks
Authors:
Prajapati, Hardik K.
Joshi, Rutvij
Links:
https://bibliotekanauki.pl/articles/2200710.pdf
Publication date:
2022
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
machine learning
deep learning
Convolutional Neural Network (CNN)
LEACH
Description:
Wireless sensor networks (WSNs) are made up of thousands of low-power micro sensors, whose principal role is to detect and report specified events to a base station. Due to limited battery power, these nodes have very restricted memory and processing capacity, and since battery replacement or recharging in sensor nodes is nearly impossible, power consumption becomes one of the most important design considerations: extending battery life and network lifetime is a key requirement in WSNs. As data transmission and reception consume the most energy, it is critical to develop a routing protocol that addresses this major problem. When it comes to sending aggregated data to the sink, hierarchical routing is critical. This research concentrates on a cluster head election scheme that rotates the cluster head role among nodes with greater energy levels than the others. We used a combination of LEACH and deep learning to extend the network lifetime of the WSN in this study. In the proposed method, cluster head selection is performed by a convolutional neural network (CNN). A comparison between the proposed solution and LEACH shows that the proposed solution increases the network lifetime and throughput.
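For context, classic LEACH elects cluster heads stochastically: in round r, an eligible node becomes a head when a random draw falls below the threshold T(n) = p / (1 - p * (r mod (1/p))), where p is the desired fraction of heads. A minimal sketch of that baseline rule (which the paper's CNN-based selection replaces):

```python
# Sketch of the classic LEACH cluster-head threshold; the paper replaces
# this stochastic rule with CNN-based selection, so this only illustrates
# the baseline being improved upon.
import random

def leach_threshold(p: float, r: int) -> float:
    """T(n) = p / (1 - p * (r mod 1/p)) for nodes not yet heads this cycle."""
    return p / (1.0 - p * (r % round(1.0 / p)))

def elect_heads(node_ids, p, r, rng=random.random):
    """Each eligible node independently draws a number; below T(n) => head."""
    t = leach_threshold(p, r)
    return [n for n in node_ids if rng() < t]

# At the start of a cycle (r = 0) the threshold equals p itself.
print(leach_threshold(0.05, 0))   # 0.05
# Late in a cycle the threshold approaches 1, so remaining nodes are elected.
print(leach_threshold(0.05, 19))  # ~1.0
```

The rotation guarantees that, over one cycle of 1/p rounds, every node serves as head once, spreading the energy cost evenly.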
Source:
International Journal of Electronics and Telecommunications; 2022, 68, 4; 799-805
2300-1933
Appears in:
International Journal of Electronics and Telecommunications
Content provider:
Biblioteka Nauki
Article
Title:
Intrusion Detection in Software Defined Networks with Self-organized Maps
Authors:
Jankowski, D.
Amanowicz, M.
Links:
https://bibliotekanauki.pl/articles/308109.pdf
Publication date:
2015
Publisher:
Instytut Łączności - Państwowy Instytut Badawczy
Subjects:
IDS dataset
machine learning
metasploit
network security
network simulation
OpenFlow
virtualization
Description:
The Software Defined Network (SDN) architecture provides new opportunities to implement security mechanisms for detecting unauthorized activities. At the same time, there are certain risks associated with this technology. The presented approach covers a concept of a measurement method, a virtual testbed and a classification mechanism for SDNs. The paper presents a measurement method which allows collecting network traffic flow parameters generated by a virtual SDN environment. The collected dataset can be used in machine learning methods to detect unauthorized activities.
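A self-organizing map, the model family named in the title, clusters flow-feature vectors by repeatedly pulling the best-matching unit and its grid neighbors toward each sample. A minimal NumPy sketch of that update rule (grid size, rates, and the random stand-in data are illustrative assumptions, not the paper's configuration):

```python
# Minimal self-organizing map (SOM) in NumPy. Grid size, learning
# schedule, and the random stand-in "flow features" are illustrative
# assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 4, 4, 3            # 4x4 map over 3-D feature vectors
weights = rng.random((grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

def bmu(x):
    """Best-matching unit: grid position of the weight vector closest to x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

def train(samples, epochs=20, lr0=0.5, rad0=1.5):
    global weights
    for e in range(epochs):
        frac = 1.0 - e / epochs
        lr, radius = lr0 * frac, max(rad0 * frac, 0.3)  # decaying schedule
        for x in samples:
            # Pull the BMU and (less strongly) its grid neighbors toward x.
            grid_d = np.linalg.norm(coords - np.array(bmu(x)), axis=-1)
            h = np.exp(-(grid_d ** 2) / (2 * radius ** 2))
            weights += lr * h[..., None] * (x - weights)

samples = rng.random((60, dim))
train(samples)
# Mean quantization error: distance from each sample to its BMU's weights.
err = np.mean([np.linalg.norm(weights[bmu(x)] - x) for x in samples])
print(round(float(err), 3))
```

After training, the quantization error is small, and flows whose BMUs fall in regions dominated by attack traffic can be flagged as suspicious.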
Source:
Journal of Telecommunications and Information Technology; 2015, 4; 3-9
1509-4553
1899-8852
Appears in:
Journal of Telecommunications and Information Technology
Content provider:
Biblioteka Nauki
Article
Title:
Development of an automated assembly process supported with an artificial neural network
Authors:
Bobka, P.
Heyn, J.
Henningson, J.-O.
Römer, M.
Engbers, T.
Dietrich, F.
Dröder, K.
Links:
https://bibliotekanauki.pl/articles/99408.pdf
Publication date:
2018
Publisher:
Wrocławska Rada Federacji Stowarzyszeń Naukowo-Technicznych
Subjects:
assembly
machine learning
neural network
industrial robot
Description:
A central problem in automated assembly is the ramp-up phase. In order to achieve the required tolerances and cycle times, assembly parameters must be determined by extensive manual parameter variation. The duration of the ramp-up phase therefore represents a planning uncertainty and a financial risk, especially when high demands are placed on dynamics and precision. To complete this phase as efficiently as possible, comprehensive planning and experienced personnel are necessary. In this paper, we examine the use of machine learning techniques for the ramp-up of an automated assembly process. Specifically, we use a deep artificial neural network to learn process parameters for pick-and-place operations on planar objects. We describe how the handling parameters of an industrial robot can be adjusted and optimized automatically by artificial neural networks and examine this approach in laboratory experiments. Furthermore, we test whether an artificial neural network can be used to optimize assembly parameters in-process, as an adaptive process controller. Finally, we discuss the advantages and disadvantages of the described approach for determining optimal assembly parameters in the ramp-up phase and during the utilization phase.
Source:
Journal of Machine Engineering; 2018, 18, 3; 28-41
1895-7595
2391-8071
Appears in:
Journal of Machine Engineering
Content provider:
Biblioteka Nauki
Article
Title:
Detection of the presence of rail corrugation using convolutional neural network
Authors:
Tabaszewski, Maciej
Firlik, Bartosz
Links:
https://bibliotekanauki.pl/articles/38890045.pdf
Publication date:
2022
Publisher:
Instytut Podstawowych Problemów Techniki PAN
Subjects:
corrugation
vibration and noise
machine learning
convolutional network
Description:
Rail corrugation is a significant problem not only in heavy-haul freight but also in light rail systems. Over recent years, considerable progress has been made in understanding, measuring and treating corrugation problems, which are also considered a matter of safety. In the presented research, convolutional neural networks (CNNs) are used to identify the occurrence of rail corrugation in light rail systems. The paper shows that by simultaneously measuring the vibration and the sound pressure, it is possible to identify rail corrugation with a very small error.
Source:
Engineering Transactions; 2022, 70, 4; 339-353
0867-888X
Appears in:
Engineering Transactions
Content provider:
Biblioteka Nauki
Article
Title:
Classification of traffic over collaborative iot/cloud platforms using deep-learning recurrent LSTM
Authors:
Patil, Sonali A.
Raj, Arun L.
Links:
https://bibliotekanauki.pl/articles/2097958.pdf
Publication date:
2021
Publisher:
Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie. Wydawnictwo AGH
Subjects:
IoT
network traffic
machine learning
classification
cloud computing
Description:
The Internet of Things (IoT) and cloud-based collaborative platforms have emerged as new infrastructures over recent decades. The classification of network traffic into benign and malevolent traffic is indispensable for IoT/cloud-based collaborative platforms, allowing channel capacity to be used optimally for transmitting benign traffic while blocking malicious traffic. The traffic classification mechanism should be dynamic and capable of classifying network traffic quickly, so that malevolent traffic can be identified at earlier stages and benign traffic can be speedily channeled to the destined nodes. In this paper, we present a deep-learning recurrent LSTM RNet-based technique for classifying traffic over IoT/cloud platforms using the Word2Vec approach. Machine-learning techniques (MLTs) have also been employed to compare their performance with the proposed LSTM RNet classification method. In the proposed research work, network traffic is classified into three classes: Tor-Normal, NonTor-Normal, and NonTor-Malicious traffic. The research outcome shows that the proposed LSTM RNet accurately classifies such traffic and also helps reduce network latency as well as enhance data transmission rates and network throughput.
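The recurrent unit underlying a model like the LSTM RNet described above can be sketched as the standard LSTM cell equations applied step by step to a sequence of embedded tokens. The sizes and random parameters below are illustrative assumptions, not the paper's architecture:

```python
# One forward step of a standard LSTM cell in NumPy, the recurrent unit
# underlying models like the paper's LSTM RNet. Sizes and the random
# parameters are illustrative; this is not the paper's architecture.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """x: (d,), h_prev/c_prev: (n,), W: (4n, d), U: (4n, n), b: (4n,)."""
    z = W @ x + U @ h_prev + b
    n = h_prev.size
    i = sigmoid(z[:n])           # input gate
    f = sigmoid(z[n:2 * n])      # forget gate
    o = sigmoid(z[2 * n:3 * n])  # output gate
    g = np.tanh(z[3 * n:])       # candidate cell state
    c = f * c_prev + i * g       # new cell state
    h = o * np.tanh(c)           # new hidden state
    return h, c

rng = np.random.default_rng(0)
d, n = 8, 16                     # embedding size, hidden size
W = rng.normal(size=(4 * n, d))
U = rng.normal(size=(4 * n, n))
b = np.zeros(4 * n)
h = c = np.zeros(n)
for x in rng.normal(size=(5, d)):  # a 5-token sequence, e.g. Word2Vec vectors
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape, bool(np.abs(h).max() < 1.0))
```

The final hidden state h would feed a softmax layer over the three traffic classes; since h = o * tanh(c), its entries are always bounded in magnitude by 1.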
Source:
Computer Science; 2021, 22 (3); 367-385
1508-2806
2300-7036
Appears in:
Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
On Efficiency of Selected Machine Learning Algorithms for Intrusion Detection in Software Defined Networks
Authors:
Jankowski, D.
Amanowicz, M.
Links:
https://bibliotekanauki.pl/articles/963945.pdf
Publication date:
2016
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
software defined network
intrusion detection
machine learning
Mininet
SDN
Description:
We propose a concept of using Software Defined Network (SDN) technology and machine learning algorithms for monitoring and detection of malicious activities in the SDN data plane. The statistics and features of network traffic are generated by the native mechanisms of SDN technology. In order to conduct tests and a verification of the concept, it was necessary to obtain a set of network workload test data. We present a virtual environment which enables generation of the SDN network traffic. The article examines the efficiency of selected machine learning methods: Self Organizing Maps and Learning Vector Quantization and their enhanced versions. The results are compared with other SDN-based IDS.
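Learning Vector Quantization, one of the methods examined above, adjusts labeled prototype vectors: the nearest prototype moves toward a sample when their labels match, and away from it otherwise. A minimal LVQ1 sketch (the data and learning rate are illustrative assumptions, not the paper's SDN traffic features):

```python
# Minimal LVQ1: labeled prototypes are attracted to same-class samples
# and repelled from other-class samples. Data and learning rate are
# illustrative, not the paper's SDN traffic features.
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=30):
    P = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            k = np.argmin(np.linalg.norm(P - x, axis=1))  # nearest prototype
            step = lr * (x - P[k])
            P[k] += step if proto_labels[k] == label else -step
    return P

def lvq1_predict(X, P, proto_labels):
    return [proto_labels[np.argmin(np.linalg.norm(P - x, axis=1))] for x in X]

rng = np.random.default_rng(1)
# Two well-separated synthetic classes standing in for flow features.
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
y = [0] * 30 + [1] * 30
P0 = np.array([[0.5, 0.5], [2.5, 2.5]])
P = lvq1_train(X, y, P0, [0, 1])
print(lvq1_predict([[0.1, 0.1], [3.1, 2.9]], P, [0, 1]))  # [0, 1]
```

Unlike the unsupervised SOM, LVQ uses the class labels directly, which is why the two methods are natural companions in this comparison.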
Source:
International Journal of Electronics and Telecommunications; 2016, 62, 3; 247-252
2300-1933
Appears in:
International Journal of Electronics and Telecommunications
Content provider:
Biblioteka Nauki
Article
Title:
Predictive modelling of turbofan engine components condition using machine and deep learning methods
Authors:
Matuszczak, Michał
Żbikowski, Mateusz
Teodorczyk, Andrzej
Links:
https://bibliotekanauki.pl/articles/1841686.pdf
Publication date:
2021
Publisher:
Polska Akademia Nauk. Polskie Naukowo-Techniczne Towarzystwo Eksploatacyjne PAN
Subjects:
reliability
prognostics
deep learning
machine learning
gas turbine
turbofan engine
neural network
condition-based maintenance
Description:
The article proposes an approach based on deep and machine learning models to predict a component failure, as an enhancement of the condition-based maintenance scheme of a turbofan engine, and reviews prognostics approaches currently used in the aviation industry. A component degradation scale representing life consumption is proposed, and the condition data collected in this way are combined with engine sensor and environmental data. Using data manipulation techniques, a framework for model training is created and the models' hyperparameters are obtained through Bayesian optimization. The models predict the continuous variable representing condition based on the input. The best-performing model is identified by determining its score on the holdout set. Deep learning models achieved an MSE score of 0.71 (an ensemble meta-model of neural networks) and significantly outperformed machine learning models, whose best score was 1.75. The deep learning models demonstrated their feasibility for predicting the component condition to within less than 1 unit of error on the rank scale.
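The model-selection step described above, scoring candidate regressors by MSE on a holdout set, can be sketched as follows. The candidate models and synthetic data are illustrative stand-ins, not the study's tuned networks or engine dataset:

```python
# Sketch of holdout-based model selection by MSE, as in the paper's
# evaluation step. The candidate models and synthetic data are stand-ins
# for the study's tuned networks and engine condition data.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=300, n_features=8, noise=5.0, random_state=0)
X_tr, X_ho, y_tr, y_ho = train_test_split(X, y, random_state=0)  # holdout split

candidates = {
    "ridge": Ridge(),
    "gbrt": GradientBoostingRegressor(random_state=0),
}
scores = {name: mean_squared_error(y_ho, m.fit(X_tr, y_tr).predict(X_ho))
          for name, m in candidates.items()}
best = min(scores, key=scores.get)  # lowest holdout MSE wins
print(best, round(scores[best], 2))
```

The paper additionally tunes each candidate's hyperparameters with Bayesian optimization before this comparison; here the defaults are used for brevity.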
Source:
Eksploatacja i Niezawodność; 2021, 23, 2; 359-370
1507-2711
Appears in:
Eksploatacja i Niezawodność
Content provider:
Biblioteka Nauki
Article
Title:
Artificial neural network (ANN) modelling to estimate bubble size from macroscopic image and object features
Authors:
Vinnett, Luis
León, Roberto
Mesa, Diego
Links:
https://bibliotekanauki.pl/articles/29552038.pdf
Publication date:
2023
Publisher:
Politechnika Wrocławska. Oficyna Wydawnicza Politechniki Wrocławskiej
Subjects:
machine learning
artificial neural network
flotation
bubble size
Sauter diameter
Description:
Bubble size measurements in aerated systems such as froth flotation cells are critical for controlling gas dispersion. Commonly, bubbles are measured by obtaining representative photographs, which are then analyzed using segmentation and identification software tools. Recent developments have focused on enhancing these segmentation tools. However, the main challenges around segmenting complex bubble clusters remain unresolved, while the tools to tackle them have become increasingly complex and computationally expensive. In this work, we propose an alternative solution, circumventing the need for image segmentation and bubble identification. An artificial neural network (ANN) was trained to estimate the Sauter mean bubble size (D32) based on macroscopic image features obtained with simple and inexpensive image analysis. The results showed excellent prediction accuracy, with a correlation coefficient R over 0.998 in the testing stage, and no bias in the error distribution. This machine learning tool paves the way for robust and fast estimation of bubble size from complex bubble images, without the need for image segmentation.
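The target quantity here, the Sauter mean diameter, is defined as D32 = Σ d_i^3 / Σ d_i^2 over the measured bubble diameters: the diameter of a sphere with the same volume-to-surface-area ratio as the population. A small sketch of that definition (the sample diameters are made up):

```python
# Sauter mean diameter D32 = sum(d_i^3) / sum(d_i^2): the diameter of a
# sphere with the same volume-to-surface-area ratio as the bubble
# population. The sample diameters below are made-up illustration data.
def sauter_mean(diameters):
    return sum(d ** 3 for d in diameters) / sum(d ** 2 for d in diameters)

# A monodisperse population's D32 is just the common diameter.
print(sauter_mean([1.2, 1.2, 1.2]))            # ~1.2
# Larger bubbles dominate D32 because of the cubic weighting.
print(sauter_mean([0.5, 0.5, 0.5, 0.5, 2.0]))  # 1.7
```

The cubic weighting explains why D32 is sensitive to the few largest bubbles, which is precisely where cluster segmentation tends to fail and where a segmentation-free estimate is attractive.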
Source:
Physicochemical Problems of Mineral Processing; 2023, 59, 5; art. no. 185759
1643-1049
2084-4735
Appears in:
Physicochemical Problems of Mineral Processing
Content provider:
Biblioteka Nauki
Article
Title:
Atrial fibrillation detection on electrocardiograms with convolutional neural networks
Authors:
Kifer, Viktor
Zagorodna, Natalia
Hevko, Olena
Links:
https://bibliotekanauki.pl/articles/408581.pdf
Publication date:
2019
Publisher:
Politechnika Lubelska. Wydawnictwo Politechniki Lubelskiej
Subjects:
electrocardiography
machine learning
neural network
Description:
In this paper, we present our research, which confirms the suitability of convolutional neural networks for the classification of single-lead ECG recordings. The proposed method was designed for classifying normal sinus rhythm, atrial fibrillation (AF), non-AF abnormal heart rhythms and noisy signals. The method combines manually selected features with the features learned by the deep neural network. The Physionet Challenge 2017 dataset of over 8500 ECG recordings was used for model training and validation. The trained model reaches an average F1-score of 0.71 in classifying normal sinus rhythm, AF and other rhythms.
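The reported metric, a class-averaged F1-score, is computed per class from true positives, false positives and false negatives, then averaged. A small sketch of that computation (the labels below are illustrative: N = normal, A = AF, O = other):

```python
# Class-averaged (macro) F1, the metric reported for the Physionet 2017
# task. Labels below are illustrative: N = normal, A = AF, O = other.
def macro_f1(y_true, y_pred, classes):
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        # F1 = 2*TP / (2*TP + FP + FN); define F1 = 0 for an absent class.
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return sum(scores) / len(scores)

y_true = ["N", "N", "A", "A", "O", "N"]
y_pred = ["N", "A", "A", "A", "O", "N"]
print(round(macro_f1(y_true, y_pred, ["N", "A", "O"]), 3))  # 0.867
```

Averaging per-class F1 weights each rhythm class equally, which matters for this dataset because AF recordings are much rarer than normal sinus rhythm.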
Source:
Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska; 2019, 9, 4; 69-73
2083-0157
2391-6761
Appears in:
Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska
Content provider:
Biblioteka Nauki
Article
Title:
Detection of Monocrystalline Silicon Wafer Defects Using Deep Transfer Learning
Authors:
Ganum, Adriana
Iskandar, D. N. F. Awang
Chin, Lim Phei
Fauzi, Ahmad Hadinata
Links:
https://bibliotekanauki.pl/articles/2058502.pdf
Publication date:
2022
Publisher:
Instytut Łączności - Państwowy Instytut Badawczy
Subjects:
automated optical inspection
machine learning
neural network
wafer imperfection identification
Description:
Defect detection is an important step in the industrial production of monocrystalline silicon. Based on a study of deep learning, this work proposes a framework for classifying monocrystalline silicon wafer defects using deep transfer learning (DTL). An existing pre-trained deep learning model was used as the starting point for building a new model. We studied the use of DTL and the potential adaptation of MobileNetV2, pre-trained on ImageNet, for extracting monocrystalline silicon wafer defect features. This sped up the training process and improved the performance of the DTL-MobileNetV2 model in detecting and classifying six types of monocrystalline silicon wafer defects (crack, double contrast, hole, microcrack, saw-mark and stain). The process of training the DTL-MobileNetV2 model was optimized by relying on a dense block layer and the global average pooling (GAP) method, which accelerated the convergence rate and improved the generalization of the classification network. The monocrystalline silicon wafer defect classification technique relying on the DTL-MobileNetV2 model achieved an accuracy rate of 98.99% when evaluated against the testing set. This shows that DTL is an effective way of detecting different types of defects in monocrystalline silicon wafers, and is thus suitable for minimizing misclassification and maximizing overall production capacity.
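Global average pooling, used in the classification head described above, collapses each feature map of the convolutional backbone to a single value by averaging over its spatial dimensions. A NumPy sketch (the tensor shape is an illustrative assumption about a standard MobileNetV2 backbone, not a detail confirmed by the paper):

```python
# Global average pooling (GAP): average each channel's feature map over
# its spatial dimensions, turning (H, W, C) activations into a length-C
# vector for the dense classification head. Shapes here are assumptions.
import numpy as np

def global_average_pool(feature_maps):
    """feature_maps: (H, W, C) -> (C,) channel descriptor."""
    return feature_maps.mean(axis=(0, 1))

rng = np.random.default_rng(0)
# A 7x7x1280 activation: the standard MobileNetV2 backbone output shape
# for 224x224 inputs (an assumption about the stock architecture).
acts = rng.random((7, 7, 1280))
vec = global_average_pool(acts)
print(vec.shape)  # (1280,)
```

Because GAP has no trainable parameters, only the small dense head on top needs fitting to the six defect classes, which is what makes transfer learning from ImageNet fast here.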
Source:
Journal of Telecommunications and Information Technology; 2022, 1; 34-42
1509-4553
1899-8852
Appears in:
Journal of Telecommunications and Information Technology
Content provider:
Biblioteka Nauki
Article
Title:
An investigation of the relationship between encoder difference and thermo-elastic machine tool deformation
Authors:
Brecher, Christian
Dehn, Mathias
Neus, Stephan
Links:
https://bibliotekanauki.pl/articles/24084708.pdf
Publication date:
2023
Publisher:
Wrocławska Rada Federacji Stowarzyszeń Naukowo-Technicznych
Subjects:
machine tool
thermal error compensation
machine learning
artificial neural network
Description:
New approaches using machine learning to model the thermo-elastic machine tool error often rely on machine-internal data, such as axis speed or axis position, as input data, which have only a delayed relation to the thermo-elastic error. Since there is no direct relation to the thermo-elastic error, this can lead to increased computational inaccuracy of the model or to the need for expensive sensor equipment for additional input data. The encoder difference is easy to obtain and has a direct relationship with the thermo-elastic error, and therefore has high potential to improve the accuracy of thermo-elastic error models. This paper first investigates the causes of the encoder difference and its relationship with the thermo-elastic error. Afterwards, a model is presented which uses the encoder difference to compute the thermo-elastic error. Due to the complexity of the relationship, it is necessary to use a machine learning approach for this. To conclude, the potential of the encoder difference as an input to the model is evaluated.
Source:
Journal of Machine Engineering; 2023, 23, 3; 26-37
1895-7595
2391-8071
Appears in:
Journal of Machine Engineering
Content provider:
Biblioteka Nauki
Article
Title:
An AI & ML based detection & identification in remote imagery: state-of-the-art
Authors:
Hashmi, Hina
Dwivedi, Rakesh
Kumar, Anil
Links:
https://bibliotekanauki.pl/articles/2141786.pdf
Publication date:
2021
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Automatyki i Pomiarów
Subjects:
convolutional neural network
remote sensed imagery
object detection
artificial intelligence
feature extraction
deep learning
machine learning
Description:
Remotely sensed images and their allied areas of application have long attracted researchers. Remote imagery serves a vast range of applications and continues to achieve milestones. Since the advent of AI-, ML-, and DL-based computing, techniques for processing and analyzing remote imagery have been growing continuously, offering countless services such as traffic surveillance, earth observation, land surveying, and other agricultural applications. As artificial intelligence has captured researchers' attention, machine learning and deep learning have proven to be the most commonly used and highly effective techniques for object detection. AI- and ML-based object segmentation and detection make this area attractive to researchers once again, with opportunities for enhanced accuracy. Several researchers have presented their work in the form of research papers highlighting the effectiveness of using remotely sensed imagery for commercial purposes. In this article, we discuss the concept of remote imagery along with some preprocessing techniques for extracting hidden and fruitful information from such images. Deep learning techniques applied by various researchers, along with object detection and object recognition, are also discussed here. This literature survey also includes a chronological review of work on detection and recognition using deep learning techniques.
Source:
Journal of Automation Mobile Robotics and Intelligent Systems; 2021, 15, 4; 3-17
1897-8649
2080-2145
Appears in:
Journal of Automation Mobile Robotics and Intelligent Systems
Content provider:
Biblioteka Nauki
Article
Title:
Artificial intelligence applications in project scheduling: a systematic review, bibliometric analysis, and prospects for future research
Authors:
Bahroun, Zied
Tanash, Moayad
Ad, Rami As
Alnajar, Mohamad
Links:
https://bibliotekanauki.pl/articles/27315576.pdf
Publication date:
2023
Publisher:
STE GROUP
Subjects:
artificial intelligence
machine learning
project scheduling
bibliometric analysis
network analysis
review
Description:
The availability of digital infrastructures and the fast-paced development of accompanying revolutionary technologies have triggered an unprecedented reliance on artificial intelligence (AI) techniques, both in theory and in practice. Within the AI domain, machine learning (ML) techniques stand out as an essential facilitator, largely enabling machines to possess human-like cognitive and decision-making capabilities. This paper provides a focused review of the literature addressing applications of emerging ML tools to solve various Project Scheduling Problems (PSPs). In particular, it employs bibliometric and network analysis tools along with a systematic literature review to analyze a pool of 104 papers published between 1985 and August 2021. The conducted analysis unveiled the top contributing authors and the most influential papers, as well as the existing research tendencies and thematic research topics within this field of study. A noticeable growth in the number of relevant studies has been seen recently, with a steady increase as of 2018. Most of the studies adopted Artificial Neural Networks, Bayesian Networks and Reinforcement Learning techniques to tackle PSPs in a stochastic environment, where these techniques are frequently hybridized with classical metaheuristics. The majority of works (57%) addressed basic resource-constrained PSPs and only 15% are devoted to the project portfolio management problem. Furthermore, this study clearly indicates that the application of AI techniques to efficiently handle PSPs is still in its infancy, bringing out the need for further research in this area. This work also identifies current research gaps and highlights a multitude of promising avenues for future research.
Source:
Management Systems in Production Engineering; 2023, 2 (31); 144-161
2299-0461
Appears in:
Management Systems in Production Engineering
Content provider:
Biblioteka Nauki
Article
