
You are searching for the phrase "multi-machine" by the criterion: Subject


Displaying 1-7 of 7
Title:
A real-valued genetic algorithm to optimize the parameters of support vector machine for classification of multiple faults in NPP
Authors:
Amer, F. Z.
El-Garhy, A. M.
Awadalla, M. H.
Rashad, S. M.
Abdien, A. K.
Links:
https://bibliotekanauki.pl/articles/147652.pdf
Publication date:
2011
Publisher:
Instytut Chemii i Techniki Jądrowej
Subjects:
support vector machine (SVM)
fault classification
multi-fault classification
genetic algorithm (GA)
machine learning
Description:
Two parameters must be carefully predetermined when establishing an efficient support vector machine (SVM) model: the regularization parameter C, which determines the trade-off between minimizing the training error and minimizing the complexity of the model, and the kernel parameter sigma (σ), which defines the non-linear mapping from the input space to a high-dimensional feature space and thereby constructs a non-linear decision hypersurface. The purpose of this study is therefore to develop a genetic-algorithm-based SVM (GASVM) model that can automatically determine the optimal parameters C and sigma with the highest predictive accuracy and generalization ability simultaneously. The GASVM scheme is applied to monitored data from a pressurized water reactor nuclear power plant (PWRNPP) to classify its associated faults. Compared to the standard SVM model, simulations indicate the superiority of GASVM on datasets with unbalanced classes. The GASVM scheme achieves higher classification accuracy with a faster learning speed.
Source:
Nukleonika; 2011, 56, 4; 323-332
0029-5922
1508-5791
Appears in:
Nukleonika
Content provider:
Biblioteka Nauki
Article
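The real-valued GA described above searches the (C, σ) space directly. A minimal sketch of such a GA follows; the fitness here is a smooth stand-in with a known optimum at C = 10, σ = 0.5 rather than cross-validated SVM accuracy, so the example stays self-contained. Bounds, operators, and rates are illustrative assumptions, not the paper's settings.

```python
import random

# Real-valued GA tuning two parameters. The true fitness would be
# cross-validated SVM accuracy over (C, sigma); this stand-in has a
# known optimum at C = 10, sigma = 0.5 (an assumption for the sketch).
BOUNDS = [(0.1, 100.0), (0.01, 10.0)]        # (C, sigma) search ranges

def fitness(ind):
    c, sigma = ind
    return -((c - 10.0) ** 2 / 100.0 + (sigma - 0.5) ** 2)

def clip(x, lo, hi):
    return max(lo, min(hi, x))

def ga(pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()                  # blend (arithmetic) crossover
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            for i, (lo, hi) in enumerate(BOUNDS):
                if rng.random() < 0.2:        # gaussian mutation, clipped to bounds
                    child[i] = clip(child[i] + rng.gauss(0, 0.1 * (hi - lo)), lo, hi)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best_c, best_sigma = ga()
```

In a full GASVM, `fitness` would train an SVM with the candidate (C, σ) and return its cross-validation accuracy.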
Title:
A hybridization of machine learning and NSGA-II for multi-objective optimization of surface roughness and cutting force in AISI 4340 alloy steel turning
Authors:
Nguyen, Anh-Tu
Nguyen, Van-Hai
Le, Tien-Thinh
Nguyen, Nhu-Tung
Links:
https://bibliotekanauki.pl/articles/2200263.pdf
Publication date:
2023
Publisher:
Wrocławska Rada Federacji Stowarzyszeń Naukowo-Technicznych
Subjects:
multi-objective optimisation
machine learning
AISI 4340
NSGA-II
ANN
Description:
This work focuses on optimizing process parameters in turning AISI 4340 alloy steel. A hybridization of Machine Learning (ML) algorithms and the Non-Dominated Sorting Genetic Algorithm (NSGA-II) is applied to find the Pareto solution. The objective functions are the simultaneous minima of average surface roughness (Ra) and cutting force under the cutting parameter constraints of cutting speed, feed rate, depth of cut, and tool nose radius in the ranges of 50–375 m/min, 0.02–0.25 mm/rev, 0.1–1.5 mm, and 0.4–0.8 mm, respectively. The present study uses five ML models – namely SVR, CAT, RFR, GBR, and ANN – to predict Ra and cutting force. Results indicate that the ANN offers the best predictive performance with respect to all accuracy metrics: root-mean-squared error (RMSE), mean absolute error (MAE), and the coefficient of determination (R2). In addition, a hybridization of NSGA-II and the ANN is implemented to find the optimal machining parameters, which lie on the Pareto front. The results of this multi-objective optimization indicate that Ra lies between 1.032 and 1.048 μm and the cutting force between 7.981 and 8.277 kgf for the five selected Pareto solutions. In the set of non-dominated solutions, none of the individual solutions is superior to any of the others, so it is the manufacturer's decision which solution to select. Within the value ranges of the Pareto solutions generated by NSGA-II, cutting speeds between 72.92 and 75.11 m/min, a feed rate of 0.02 mm/rev, a depth of cut between 0.62 and 0.79 mm, and a tool nose radius of 0.4 mm are recommended. Experimental validations were finally conducted to verify the optimization procedure.
Source:
Journal of Machine Engineering; 2023, 23, 1; 133-153
1895-7595
2391-8071
Appears in:
Journal of Machine Engineering
Content provider:
Biblioteka Nauki
Article
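The Pareto idea at the core of NSGA-II can be sketched as a non-dominated filter over candidate solutions. The (Ra, cutting force) pairs below are illustrative, not the paper's measurements; both objectives are minimised.

```python
# Non-dominated filtering for two minimisation objectives, as in the
# sorting step of NSGA-II. Candidate (Ra, force) pairs are made up.
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (Ra in um, cutting force in kgf) -- illustrative candidates
candidates = [(1.03, 8.28), (1.05, 7.98), (1.10, 8.50), (1.04, 8.10), (1.20, 7.90)]
front = pareto_front(candidates)
```

Here (1.10, 8.50) drops out because (1.03, 8.28) is better in both objectives; the remaining points are mutually non-dominated, which is exactly why the final choice among them is left to the manufacturer.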
Title:
Multi-view learning for software defect prediction
Authors:
Kiyak, Elife Ozturk
Birant, Derya
Birant, Kokten Ulas
Links:
https://bibliotekanauki.pl/articles/2060905.pdf
Publication date:
2021
Publisher:
Politechnika Wrocławska. Oficyna Wydawnicza Politechniki Wrocławskiej
Subjects:
software defect prediction
multi-view learning
machine learning
k-nearest neighbor
Description:
Background: Traditionally, machine learning algorithms have been applied to software defect prediction by considering single-view data, meaning that the input data contain a single feature vector. Nevertheless, different software engineering data sources may include multiple and partially independent information, which makes the standard single-view approaches ineffective. Objective: In order to overcome the single-view limitation of current studies, this article proposes the use of a multi-view learning method for software defect classification problems. Method: The Multi-View k-Nearest Neighbors (MVKNN) method was applied in the software engineering field. In this method, base classifiers are first constructed to learn from each view, and the classifiers are then combined to create a robust multi-view model. Results: In the experimental studies, our algorithm (MVKNN) is compared with the standard k-nearest neighbors (KNN) algorithm on 50 datasets obtained from different software bug repositories. The experimental results demonstrate that MVKNN outperforms KNN on most of the datasets in terms of accuracy. The average accuracy values of MVKNN are 86.59%, 88.09%, and 83.10% for the NASA MDP, Softlab, and OSSP datasets, respectively. Conclusion: The results show that using multiple views (MVKNN) can usually improve classification accuracy compared to a single-view strategy (KNN) for software defect prediction.
Source:
e-Informatica Software Engineering Journal; 2021, 15, 1; 163-184
1897-7979
Appears in:
e-Informatica Software Engineering Journal
Content provider:
Biblioteka Nauki
Article
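The two-stage scheme in the description (a base classifier per view, then a combination) can be sketched with plain k-NN and a majority vote. The tiny dataset and the voting rule are illustrative assumptions, not the paper's exact algorithm or data.

```python
from collections import Counter
import math

# One k-NN base classifier per feature view, combined by majority vote
# across views -- a hedged sketch of the multi-view idea behind MVKNN.
def knn_predict(train, labels, x, k=3):
    nearest = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

def multiview_predict(views_train, labels, views_x, k=3):
    # one base prediction per view, then a majority vote over the views
    preds = [knn_predict(tr, labels, x, k) for tr, x in zip(views_train, views_x)]
    return Counter(preds).most_common(1)[0][0]

# view 1: a size-like metric; view 2: a churn-like metric (both synthetic)
views_train = ([[1.0], [1.2], [5.0], [5.2]], [[0.1], [0.2], [0.9], [1.0]])
labels = ["clean", "clean", "defect", "defect"]
pred = multiview_predict(views_train, labels, ([1.1], [0.15]))
```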
Title:
Developing a data-driven soft sensor to predict silicate impurity in iron ore flotation concentrate
Authors:
Pural, Yusuf Enes
Links:
https://bibliotekanauki.pl/articles/24148677.pdf
Publication date:
2023
Publisher:
Politechnika Wrocławska. Oficyna Wydawnicza Politechniki Wrocławskiej
Subjects:
soft sensor
machine learning
random forest
multi-layer perceptron
flotation
grade estimation
Description:
Soft sensors are mathematical models that estimate the value of a process variable that is difficult or expensive to measure directly. They can be based on first-principles models, data-based models, or a combination of both. These models are increasingly used in mineral processing to estimate and optimize important performance parameters such as mill load, mineral grades, and particle size. This study investigates the development of a data-driven soft sensor to predict the silicate content in iron ore reverse flotation concentrate, a crucial indicator of plant performance. The proposed soft sensor model employs a dataset obtained from Kaggle, which includes measurements of iron and silicate content in the feed to the plant, reagent dosages, weight and pH of the pulp, as well as the air flow rates and froth levels in the flotation units. To reduce the dimensionality of the dataset, Principal Component Analysis, an unsupervised machine learning method, was applied. The soft sensor model was developed using three machine learning algorithms, namely Ridge Regression, Multi-Layer Perceptron, and Random Forest. The Random Forest model, created with non-reduced data, demonstrated superior performance, with an R-squared value of 96.5% and a mean absolute error of 0.089. The results suggest that the proposed soft sensor model can accurately predict the silicate content in the iron ore flotation concentrate using machine learning algorithms. Moreover, the study highlights the importance of selecting appropriate algorithms for soft sensor development in mineral processing plants.
Source:
Physicochemical Problems of Mineral Processing; 2023, 59, 5; art. no. 169823
1643-1049
2084-4735
Appears in:
Physicochemical Problems of Mineral Processing
Content provider:
Biblioteka Nauki
Article
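The simplest of the three model families named above, ridge regression, can be fit in closed form for a single predictor, which is enough to show the soft-sensor idea. The feed/concentrate silica values below are synthetic stand-ins, not the Kaggle dataset used in the study.

```python
# One-feature ridge regression soft sensor, fit in closed form:
# w = sum(x'y') / (sum(x'^2) + lambda), with x', y' the centred data.
def ridge_fit(xs, ys, lam=0.1):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    xc = [x - mx for x in xs]          # centre the predictor
    yc = [y - my for y in ys]          # centre the target
    w = sum(x * y for x, y in zip(xc, yc)) / (sum(x * x for x in xc) + lam)
    return w, my - w * mx              # weight and intercept

# synthetic feed silica (%) vs. concentrate silica (%) pairs
feed = [10.0, 12.0, 14.0, 16.0, 18.0]
conc = [1.0, 1.2, 1.4, 1.6, 1.8]
w, b = ridge_fit(feed, conc)
pred = w * 15.0 + b                    # soft-sensor estimate at feed = 15%
```

The regularisation term `lam` shrinks the weight slightly below the ordinary least-squares value, which is the trade-off ridge regression makes against overfitting.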
Title:
A rainfall forecasting method using machine learning models and its application to the Fukuoka city case
Authors:
Sumi, S. M.
Zaman, M. F.
Hirose, H.
Links:
https://bibliotekanauki.pl/articles/331290.pdf
Publication date:
2012
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
maszyna ucząca się
metoda wielomodelowa
przetwarzanie wstępne
rainfall forecasting
machine learning
multi-model method
preprocessing
model ranking
Description:
In the present article, an attempt is made to derive optimal data-driven machine learning methods for forecasting the average daily and monthly rainfall of Fukuoka city, Japan. This comparative study concentrates on three aspects: modelling inputs, modelling methods, and pre-processing techniques. A comparison between linear correlation analysis and average mutual information is made to find an optimal input technique. For modelling the rainfall, a novel hybrid multi-model method is proposed and compared with its constituent models. The models include the artificial neural network, multivariate adaptive regression splines, the k-nearest neighbour, and radial-basis support vector regression. Each of these methods is applied to model the daily and monthly rainfall, coupled with a pre-processing technique including moving average and principal component analysis. In the first stage of the hybrid method, sub-models from each of the above methods are constructed with different parameter settings. In the second stage, the sub-models are ranked with a variable selection technique and the higher-ranked models are selected based on the leave-one-out cross-validation error. The forecast of the hybrid model is produced by a weighted combination of the finally selected models.
Source:
International Journal of Applied Mathematics and Computer Science; 2012, 22, 4; 841-854
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
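The second stage of the hybrid method, ranking sub-models by leave-one-out error and blending the best of them, can be sketched under stated assumptions: the error values and forecasts below are illustrative, and inverse-error weighting is one plausible choice of weighted combination, not necessarily the paper's.

```python
# Rank sub-models by leave-one-out (LOO) error, keep the best `keep`
# of them, and blend their forecasts with inverse-error weights.
def weighted_combination(forecasts, loo_errors, keep=3):
    ranked = sorted(zip(forecasts, loo_errors), key=lambda t: t[1])[:keep]
    weights = [1.0 / e for _, e in ranked]   # lower error -> larger weight
    return sum(f * w for (f, _), w in zip(ranked, weights)) / sum(weights)

# daily-rainfall forecasts (mm) from four sub-models and their LOO errors
forecast = weighted_combination([4.0, 5.0, 6.0, 9.0], [0.5, 1.0, 2.0, 4.0])
```

The worst-ranked sub-model (forecast 9.0, error 4.0) is dropped entirely, and the blend leans toward the most accurate sub-model.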
Title:
Application of machine learning to classify wear level of multi-piston displacement pump
Authors:
Iwaniec, Marek
Stojek, Jerzy
Links:
https://bibliotekanauki.pl/articles/128216.pdf
Publication date:
2019
Publisher:
Politechnika Poznańska. Instytut Mechaniki Stosowanej
Subjects:
machine learning
diagnostics
signal analysis
multi-piston pump
vibrations
uczenie maszynowe
diagnostyka
analiza sygnałów
pompa wielotłoczkowa
drgania
Description:
This article describes the application of machine learning to classifying the wear level of a multi-piston displacement pump. A diagnostic experiment carried out to acquire vibration signal matrices from selected locations on the pump body is described herein. The measured signals were subjected to time- and frequency-domain analysis. Signal attributes related to time and frequency were tabulated according to pump wear level. Subsequently, classification models of pump wear level were developed in the Matlab package and their accuracy was assessed. A selected model was then validated. The article closes with a summary.
Source:
Vibrations in Physical Systems; 2019, 30, 2; 1-14
0860-6897
Appears in:
Vibrations in Physical Systems
Content provider:
Biblioteka Nauki
Article
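The time-domain attributes tabulated per wear level are typically quantities such as RMS, peak, and crest factor. A hedged sketch of that extraction step follows; the pure sine signal is synthetic, and the real pump signals and full attribute table are in the article.

```python
import math

# Time-domain vibration features of the kind tabulated per wear level:
# RMS (energy), peak amplitude, and crest factor (peak / RMS, which
# rises when impulsive wear-related events appear in the signal).
def time_features(signal):
    n = len(signal)
    rms = math.sqrt(sum(s * s for s in signal) / n)
    peak = max(abs(s) for s in signal)
    return {"rms": rms, "peak": peak, "crest": peak / rms}

# ten full periods of a unit sine sampled at 50 points per period
sig = [math.sin(2 * math.pi * i / 50) for i in range(500)]
feats = time_features(sig)
```

For a pure sine the RMS is 1/√2 ≈ 0.707 and the crest factor is close to √2; a worn pump's impulsive signal would push the crest factor well above that.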
Title:
Application of agent-based simulated annealing and tabu search procedures to solving the data reduction problem
Authors:
Czarnowski, I.
Jędrzejowicz, P.
Links:
https://bibliotekanauki.pl/articles/907819.pdf
Publication date:
2011
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
redukcja danych
komputerowe uczenie się
optymalizacja
system wieloagentowy
data reduction
machine learning
A-Team
optimization
multi-agent system
Description:
The problem considered concerns data reduction for machine learning. Data reduction aims at deciding which features and instances from the training set should be retained for further use during the learning process. Data reduction improves the capabilities and generalization properties of the learning model, shortens the learning process, and can help in scaling up to large data sources. The paper proposes an agent-based data reduction approach in which the learning process is executed by a team of agents (A-Team). Several A-Team architectures with agents executing simulated annealing and tabu search procedures are proposed and investigated. The paper includes a detailed description of the proposed approach and discusses the results of a validating experiment.
Source:
International Journal of Applied Mathematics and Computer Science; 2011, 21, 1; 57-68
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
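The instance-selection side of data reduction can be sketched as simulated annealing over a binary keep/drop mask, the kind of procedure one of the A-Team agents runs. The objective below is a stand-in that rewards hitting a target subset size; the real objective would be the accuracy of a classifier trained on the retained instances. All parameter values are assumptions for the sketch.

```python
import math
import random

# Simulated annealing over a binary mask of training instances.
# cost() is a stand-in for (1 - classifier accuracy on retained data).
def anneal(n_instances=20, target=8, steps=2000, seed=0):
    rng = random.Random(seed)
    mask = [rng.random() < 0.5 for _ in range(n_instances)]
    def cost(m):
        return abs(sum(m) - target)
    cur = cost(mask)
    t = 2.0                                # initial temperature
    for _ in range(steps):
        i = rng.randrange(n_instances)
        mask[i] = not mask[i]              # neighbour move: flip one instance
        new = cost(mask)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new                      # accept (always if not worse)
        else:
            mask[i] = not mask[i]          # revert rejected move
        t *= 0.995                         # geometric cooling
    return mask, cur

mask, final_cost = anneal()
```

Early on, the high temperature lets the agent accept worse masks and escape local optima; as it cools, the search settles into a subset meeting the objective.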