
You are searching for the phrase "Machine Learning" by criterion: Subject


Title:
2D Cadastral Coordinate Transformation using extreme learning machine technique
Authors:
Ziggah, Y. Y.
Issaka, Y.
Laari, P. B.
Hui, Z.
Links:
https://bibliotekanauki.pl/articles/145372.pdf
Publication date:
2018
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
transformacja współrzędnych
sieci neuronowe
dane geodezyjne
sieć radialna
coordinate transformation
extreme learning machine
backpropagation neural network
radial basis function neural network
geodetic datum
Description:
Land surveyors, photogrammetrists, remote sensing engineers and professionals in the Earth sciences are often faced with the task of transforming coordinates from one geodetic datum to another to serve their desired purpose. The essence is to create compatibility between data related to different geodetic reference frames for geospatial applications. Strictly speaking, conventional techniques of conformal, affine and projective transformation models are mostly used to accomplish such a task. In developing countries like Ghana, where there are no immediate plans to establish a geocentric datum and the astro-geodetic datums still serve as the national mapping reference surface, there is an urgent need to explore the suitability of other transformation methods. In this study, an effort has been made to explore the proficiency of the Extreme Learning Machine (ELM) as a novel alternative coordinate transformation method. The proposed ELM approach was applied to data from the Ghana geodetic reference network. The ELM transformation results have been analysed and compared with the benchmark methods of backpropagation neural network (BPNN), radial basis function neural network (RBFNN), two-dimensional (2D) affine and 2D conformal transformation. The overall study results indicate that the ELM can produce transformation results comparable to the widely used BPNN and RBFNN, and better than the 2D affine and 2D conformal. The results produced by the ELM demonstrate it to be a promising tool for coordinate transformation in Ghana.
Source:
Geodesy and Cartography; 2018, 67, 2; 321-343
2080-6736
2300-2581
Appears in:
Geodesy and Cartography
Content provider:
Biblioteka Nauki
Article
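The 2D conformal (similarity) transformation used as a baseline in this study has a closed-form least-squares solution. A minimal sketch in Python; the coordinate pairs and parameter values below are made-up illustration data, not the Ghana network:

```python
import math

def fit_conformal_2d(src, dst):
    """Least-squares fit of X = a*x - b*y + tx, Y = b*x + a*y + ty."""
    n = len(src)
    xm = sum(p[0] for p in src) / n
    ym = sum(p[1] for p in src) / n
    Xm = sum(p[0] for p in dst) / n
    Ym = sum(p[1] for p in dst) / n
    # Centre both point sets; a and b then have a closed-form solution.
    S = na = nb = 0.0
    for (x, y), (X, Y) in zip(src, dst):
        xc, yc, Xc, Yc = x - xm, y - ym, X - Xm, Y - Ym
        S += xc * xc + yc * yc
        na += xc * Xc + yc * Yc
        nb += xc * Yc - yc * Xc
    a, b = na / S, nb / S
    tx = Xm - (a * xm - b * ym)
    ty = Ym - (b * xm + a * ym)
    return a, b, tx, ty

# Synthetic check: points transformed with known parameters are recovered.
true = (1.00002, 0.00015, 1200.0, -850.0)   # a, b, tx, ty
src = [(1000.0, 2000.0), (3000.0, 1500.0), (2500.0, 4000.0), (500.0, 800.0)]
dst = [(true[0]*x - true[1]*y + true[2], true[1]*x + true[0]*y + true[3])
       for x, y in src]
est = fit_conformal_2d(src, dst)
print(est)
```

With noise-free synthetic data the four parameters are recovered exactly; with real network data the same formulas give the least-squares estimate.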
Title:
A classification of real time analytics methods. An outlook for the use within the smart factory
Authors:
Trinks, S.
Links:
https://bibliotekanauki.pl/articles/321330.pdf
Publication date:
2018
Publisher:
Politechnika Śląska. Wydawnictwo Politechniki Śląskiej
Subjects:
real time analytics
smart factory
Industry 4.0
smart manufacturing
internet of things
machine learning
analiza w czasie rzeczywistym
inteligentna fabryka
Przemysł 4.0
inteligentna produkcja
Internet rzeczy
uczenie maszynowe
Description:
The creation of value in a factory is being transformed. The spread of sensors and embedded systems and the development of the Internet of Things (IoT) create a multitude of possibilities for upcoming Real Time Analytics (RTA) applications. The topic of big data had already established the use of analytical solutions for processing in real time; now, the methods and concepts introduced there can be transferred to the industrial domain. This paper deals with the current state of RTA, with the objective of identifying the methods applied. In addition, the paper includes a classification of these methods and an outlook for their use within the area of the smart factory.
Source:
Zeszyty Naukowe. Organizacja i Zarządzanie / Politechnika Śląska; 2018, 119; 313-329
1641-3466
Appears in:
Zeszyty Naukowe. Organizacja i Zarządzanie / Politechnika Śląska
Content provider:
Biblioteka Nauki
Article
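Real-time analytics of the kind surveyed here typically processes unbounded sensor streams with windowed aggregates rather than batch scans. A toy sliding-window sketch; the window size and readings are arbitrary illustration values:

```python
from collections import deque

class SlidingWindowMean:
    """Keeps the mean of the last `size` readings in O(1) per update."""
    def __init__(self, size):
        self.size = size
        self.window = deque()
        self.total = 0.0

    def update(self, value):
        self.window.append(value)
        self.total += value
        if len(self.window) > self.size:
            self.total -= self.window.popleft()
        return self.total / len(self.window)

# Feed a simulated sensor stream and read the rolling mean after each value.
w = SlidingWindowMean(3)
means = [w.update(v) for v in [10, 20, 30, 40]]
print(means)  # [10.0, 15.0, 20.0, 30.0]
```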
Title:
A comparative study of corporate credit ratings prediction with machine learning
Authors:
Doğan, Seyyide
Büyükkör, Yasin
Atan, Murat
Links:
https://bibliotekanauki.pl/articles/2175830.pdf
Publication date:
2022
Publisher:
Politechnika Wrocławska. Oficyna Wydawnicza Politechniki Wrocławskiej
Subjects:
credit rating
credit risk
machine learning
Description:
Credit scores are critical for financial sector investors and government officials, so it is important to develop reliable, transparent and appropriate tools for obtaining ratings. This study aims to predict company credit scores with machine learning and modern statistical methods, both on sectoral and aggregated data. Analyses are made on 1881 companies operating in three different sectors that applied for loans from Turkey's largest public bank. The results of the experiment are compared in terms of classification accuracy, sensitivity, specificity, precision and Matthews correlation coefficient. When the credit ratings are estimated on a sectoral basis, the classification rate changes considerably. Considering the analysis results, logistic regression analysis, support vector machines, random forest and XGBoost perform better than decision tree and k-nearest neighbour for all data sets.
Source:
Operations Research and Decisions; 2022, 32, 1; 25-47
2081-8858
2391-6060
Appears in:
Operations Research and Decisions
Content provider:
Biblioteka Nauki
Article
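The comparison metrics named in the abstract (accuracy, sensitivity, specificity, precision, Matthews correlation coefficient) all derive from the binary confusion matrix. A small sketch with made-up counts, not the study's data:

```python
import math

def binary_metrics(tp, tn, fp, fn):
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)          # recall / true positive rate
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    # MCC balances all four cells; +1 perfect, 0 random, -1 inverted.
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return accuracy, sensitivity, specificity, precision, mcc

m = binary_metrics(tp=80, tn=90, fp=10, fn=20)
print([round(v, 4) for v in m])  # [0.85, 0.8, 0.9, 0.8889, 0.7035]
```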
Title:
A comparative study on performance of basic and ensemble classifiers with various datasets
Authors:
Gunakala, Archana
Shahid, Afzal Hussain
Links:
https://bibliotekanauki.pl/articles/30148255.pdf
Publication date:
2023
Publisher:
Polskie Towarzystwo Promocji Wiedzy
Subjects:
classification
Naïve Bayes
neural network
Support Vector Machine
Decision Tree
ensemble learning
Random Forest
Description:
Classification plays a critical role in machine learning (ML) systems for processing images, text and high-dimensional data. Predicting class labels from training data is the primary goal of classification. An optimal model for a particular classification problem is chosen based on the model's performance and execution time. This paper compares and analyzes the performance of basic as well as ensemble classifiers utilizing 10-fold cross-validation and also discusses their essential concepts, advantages, and disadvantages. In this study five basic classifiers, namely Naïve Bayes (NB), Multi-layer Perceptron (MLP), Support Vector Machine (SVM), Decision Tree (DT), and Random Forest (RF), and the ensemble of all five classifiers, along with a few more combinations, are compared on five University of California Irvine (UCI) ML Repository datasets and a Diabetes Health Indicators dataset from the Kaggle repository. To analyze and compare the performance of the classifiers, evaluation metrics like Accuracy, Recall, Precision, Area Under Curve (AUC) and F-Score are used. Experimental results showed that SVM performs best on two of the six datasets (Diabetes Health Indicators and Waveform), RF performs best on the Arrhythmia, Sonar and Tic-tac-toe datasets, and the best ensemble combination is found to be DT+SVM+RF on the Ionosphere dataset, with respective accuracies of 72.58%, 90.38%, 81.63%, 73.59%, 94.78% and 94.01%. The proposed ensemble combinations outperformed the conventional models for a few datasets.
Source:
Applied Computer Science; 2023, 19, 1; 107-132
1895-3735
2353-6977
Appears in:
Applied Computer Science
Content provider:
Biblioteka Nauki
Article
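Ensembles such as the DT+SVM+RF combination above are usually built by voting over the base classifiers' predictions. A toy majority-vote sketch over pre-computed predictions; the label lists are invented for illustration:

```python
from collections import Counter

def majority_vote(*prediction_lists):
    """Combine per-sample predictions of several classifiers by plurality."""
    combined = []
    for labels in zip(*prediction_lists):
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# Three base classifiers (e.g. DT, SVM, RF) predicting 5 samples.
dt  = [1, 0, 1, 1, 0]
svm = [1, 1, 1, 0, 0]
rf  = [0, 0, 1, 1, 1]
print(majority_vote(dt, svm, rf))  # [1, 0, 1, 1, 0]
```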
Title:
A comparison of conventional and deep learning methods of image classification
Porównanie metod klasycznego i głębokiego uczenia maszynowego w klasyfikacji obrazów
Authors:
Dovbnych, Maryna
Plechawska-Wójcik, Małgorzata
Links:
https://bibliotekanauki.pl/articles/2055127.pdf
Publication date:
2021
Publisher:
Politechnika Lubelska. Instytut Informatyki
Subjects:
image classification
machine learning
deep learning
neural networks
klasyfikacja obrazów
uczenie maszynowe
uczenie głębokie
sieci neuronowe
Description:
The aim of the research is to compare traditional and deep learning methods in image classification tasks. The conducted research experiment covers the analysis of five different models of neural networks: two models of multi-layer perceptron architecture (MLP with two hidden layers, MLP with three hidden layers) and three models of convolutional architecture (the three-VGG-block model, AlexNet and GoogLeNet). The models were tested on two different datasets, CIFAR-10 and MNIST, and were applied to the task of image classification. They were tested for classification performance, training speed, and the effect of the complexity of the dataset on the training outcome.
Celem badań jest porównanie metod klasycznego i głębokiego uczenia w zadaniach klasyfikacji obrazów. Przeprowadzony eksperyment badawczy obejmuje analizę pięciu różnych modeli sieci neuronowych: dwóch modeli wielowarstwowej architektury perceptronowej (MLP z dwiema warstwami ukrytymi, MLP z trzema warstwami ukrytymi) oraz trzech modeli architektury konwolucyjnej (model z trzema blokami VGG, AlexNet i GoogLeNet). Modele przetrenowano na dwóch różnych zbiorach danych, CIFAR-10 i MNIST, i zastosowano w zadaniu klasyfikacji obrazów. Zostały one zbadane pod kątem wydajności klasyfikacji, szybkości trenowania i wpływu złożoności zbioru danych na wynik trenowania.
Source:
Journal of Computer Sciences Institute; 2021, 21; 303-308
2544-0764
Appears in:
Journal of Computer Sciences Institute
Content provider:
Biblioteka Nauki
Article
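One concrete reason convolutional models behave differently from MLPs on images is parameter count: a dense layer grows with image size, while a convolutional layer does not. A quick back-of-envelope comparison; the layer shapes are illustrative, not the exact configurations from the paper:

```python
def dense_params(n_in, n_out):
    # weights + biases of a fully connected layer
    return n_in * n_out + n_out

def conv_params(k, c_in, c_out):
    # c_out kernels of size k*k over c_in channels, plus one bias per kernel
    return k * k * c_in * c_out + c_out

# 32x32 RGB input (CIFAR-10 size): dense layer to 128 units
# vs a 3x3 convolution producing 128 feature maps.
print(dense_params(32 * 32 * 3, 128))  # 393344
print(conv_params(3, 3, 128))          # 3584
```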
Title:
A Comprehensive study: Sarcasm detection in sentimental analysis
Authors:
Ratawal, Yamini
Tayal, Devendra
Links:
https://bibliotekanauki.pl/articles/1159725.pdf
Publication date:
2018
Publisher:
Przedsiębiorstwo Wydawnictw Naukowych Darwin / Scientific Publishing House DARWIN
Subjects:
Sentimental analysis
Web mining
deep learning
machine learning
opinion mining
text mining
Description:
Sarcasm detection is one of the active research areas in sentiment analysis. This paper discusses this recent issue in sentiment analysis: sarcasm detection. In our work, we describe the different techniques used in sarcasm detection in a way that helps a novice researcher get oriented efficiently. The paper presents the different methodologies for carrying out research in this field.
Source:
World Scientific News; 2018, 113; 1-9
2392-2192
Appears in:
World Scientific News
Content provider:
Biblioteka Nauki
Article
Title:
A comprehensive study on the application of firefly algorithm in prediction of energy dissipation on block ramps
Authors:
Mahdavi-Meymand, Amin
Sulisz, Wojciech
Zounemat-Kermani, Mohammad
Links:
https://bibliotekanauki.pl/articles/2087026.pdf
Publication date:
2022
Publisher:
Polska Akademia Nauk. Polskie Naukowo-Techniczne Towarzystwo Eksploatacyjne PAN
Subjects:
firefly algorithm
machine learning
energy dissipation
block ramp
Description:
In this study, novel integrative machine learning models embedded with the firefly algorithm (FA) were developed and employed to predict energy dissipation on block ramps. The models used include the multi-layer perceptron neural network (MLPNN), adaptive neuro-fuzzy inference system (ANFIS), group method of data handling (GMDH), support vector regression (SVR), a linear equation (LE), and a nonlinear regression equation (NE). The investigation focused on evaluating the performance of the standard and integrative models in different runs. The performance of the machine learning models and the nonlinear equation is higher than that of the linear equation. The results also show that the FA increases the performance of all applied models. Moreover, the results indicate that ANFIS-FA is the most stable integrative model in comparison to the other embedded methods, and that GMDH and SVR are the most stable techniques among all applied models. The results also show that the accuracy of the LE-FA technique is relatively low (RMSE = 0.091); the most accurate results are provided by SVR-FA (RMSE = 0.034).
Source:
Eksploatacja i Niezawodność; 2022, 24, 2; 200-210
1507-2711
Appears in:
Eksploatacja i Niezawodność
Content provider:
Biblioteka Nauki
Article
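The firefly algorithm moves each candidate toward brighter (better) candidates with an attractiveness that decays with distance, plus a small random step. A compact sketch minimizing the sphere function; all hyperparameters are generic textbook defaults, not the paper's settings:

```python
import math
import random

def firefly_minimize(f, dim, n=15, iters=100, beta0=1.0, gamma=1.0, alpha=0.1):
    random.seed(0)
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f(pop[j]) < f(pop[i]):   # j is "brighter": move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [a + beta * (b - a)
                              + alpha * random.uniform(-0.5, 0.5)
                              for a, b in zip(pop[i], pop[j])]
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)
best = firefly_minimize(sphere, dim=2)
print(best, sphere(best))
```

The swarm contracts around the best solution; the `alpha` noise term keeps some exploration, so the final value hovers near, not exactly at, the optimum.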
Title:
A cough-based COVID-19 detection system using PCA and machine learning classifiers
Authors:
Benmalek, Elmehdi
Mhamdi, Jamal El
Jilbab, Abdelilah
Jbari, Atman
Links:
https://bibliotekanauki.pl/articles/38431179.pdf
Publication date:
2022
Publisher:
Polskie Towarzystwo Promocji Wiedzy
Subjects:
COVID-19
cough recordings
machine learning
PCA
classification
Description:
In 2019, the whole world faced a health emergency due to the emergence of the coronavirus (COVID-19). About 223 countries were affected by the coronavirus. Medical and health services faced difficulties in managing the disease, which requires a significant amount of health system resources. Several artificial intelligence-based systems have been designed to automatically detect COVID-19 in order to limit the spread of the virus. Researchers have found that this virus has a major impact on voice production due to dysfunction of the respiratory system. In this paper, we investigate and analyze the effectiveness of cough analysis for accurately detecting COVID-19. To do so, we performed binary classification, distinguishing COVID-positive patients from healthy controls. The recordings were collected from the Coswara dataset, a crowdsourcing project of the Indian Institute of Science (IIS). After data collection, we extracted the MFCC features from the cough recordings. These acoustic features are fed to a Decision Tree (DT), k-nearest neighbour (kNN) with k = 3, support vector machine (SVM), and deep neural network (DNN), either directly or after dimensionality reduction using principal component analysis (PCA) with 95 percent variance or 6 principal components. The 3NN classifier with all features produced the best classification results: it detects COVID-19 patients with an accuracy of 97.48 percent, an f1-score of 96.96 percent, and an MCC of 0.95, suggesting that this method can accurately distinguish healthy controls from COVID-19 patients.
Source:
Applied Computer Science; 2022, 18, 4; 96-115
1895-3735
2353-6977
Appears in:
Applied Computer Science
Content provider:
Biblioteka Nauki
Article
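The best-performing model in this study is a plain 3-nearest-neighbour classifier over acoustic features. Its core logic fits in a few lines; the 2-D feature vectors below are invented toy values standing in for MFCCs:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority label among its k nearest training points."""
    dists = sorted(
        (math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y))
    top = [label for _, label in dists[:k]]
    return Counter(top).most_common(1)[0][0]

# Toy feature space: class 0 clusters near the origin, class 1 near (5, 5).
X = [(0.0, 0.2), (0.3, 0.1), (0.1, 0.4), (5.0, 5.2), (4.8, 5.1), (5.3, 4.9)]
y = [0, 0, 0, 1, 1, 1]
print(knn_predict(X, y, (0.2, 0.3)))  # 0
print(knn_predict(X, y, (5.1, 5.0)))  # 1
```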
Title:
A cough-based COVID-19 detection with gammatone and Mel-frequency cepstral coefficients
Authors:
Benmalek, Elmehdi
El Mhamdi, Jamal
Jilbab, Abdelilah
Jbari, Atman
Links:
https://bibliotekanauki.pl/articles/2203646.pdf
Publication date:
2023
Publisher:
Polska Akademia Nauk. Polskie Towarzystwo Diagnostyki Technicznej PAN
Subjects:
COVID-19
cough recordings
machine learning
mel-frequency cepstral coefficients
gammatone cepstral coefficients
feature selection
uczenie maszynowe
współczynniki mel-cepstralne
Description:
Many countries have adopted a public health approach that aims to address the particular challenges faced during the Coronavirus disease 2019 (COVID-19) pandemic. Researchers mobilized to manage and limit the spread of the virus, and multiple artificial intelligence-based systems have been designed to automatically detect the disease. Among these are voice-based systems, since the virus has a major impact on voice production due to dysfunction of the respiratory system. In this paper, we investigate and analyze the effectiveness of cough analysis for accurately detecting COVID-19. To do so, we distinguished COVID-positive patients from healthy controls. After extracting the gammatone cepstral coefficients (GTCC) and the Mel-frequency cepstral coefficients (MFCC), we performed feature selection (FS) and classification with multiple machine learning algorithms. By combining all features with the 3-nearest neighbour (3NN) classifier, we achieved the highest classification results: the model is able to detect COVID-19 patients with an accuracy and an f1-score above 98 percent. When applying FS, the highest accuracy and F1-score were achieved by the same model with the ReliefF algorithm; we lose only 1 percent of accuracy by using 12 features instead of the original 53.
Source:
Diagnostyka; 2023, 24, 2; art. no. 2023214
1641-6414
2449-5220
Appears in:
Diagnostyka
Content provider:
Biblioteka Nauki
Article
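Filter-style feature selection such as ReliefF scores each feature and keeps the top-ranked subset. As a much simpler stand-in, a Fisher-like score (between-class separation over within-class spread) shows the general shape; the data values are invented:

```python
from statistics import mean, pvariance

def fisher_score(values, labels):
    """Higher = the feature separates the two classes better."""
    a = [v for v, l in zip(values, labels) if l == 0]
    b = [v for v, l in zip(values, labels) if l == 1]
    return (mean(a) - mean(b)) ** 2 / (pvariance(a) + pvariance(b) + 1e-12)

labels      = [0, 0, 0, 1, 1, 1]
informative = [1.0, 1.1, 0.9, 5.0, 5.2, 4.9]   # shifts strongly with the label
noisy       = [3.0, 5.1, 1.2, 4.0, 2.2, 4.8]   # barely related to the label
scores = {"informative": fisher_score(informative, labels),
          "noisy": fisher_score(noisy, labels)}
print(scores)
```

Ranking features by such a score and keeping the top k is the essence of filter FS; ReliefF refines this by scoring features on nearest-neighbour comparisons instead of class means.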
Title:
A Deep-Learning-Based Bug Priority Prediction Using RNN-LSTM Neural Networks
Authors:
Bani-Salameh, Hani
Sallam, Mohammed
Al shboul, Bashar
Links:
https://bibliotekanauki.pl/articles/1818480.pdf
Publication date:
2021
Publisher:
Politechnika Wrocławska. Oficyna Wydawnicza Politechniki Wrocławskiej
Subjects:
assigning
priority
bug tracking systems
bug priority
bug severity
closed-source
data mining
machine learning
ML
deep learning
RNN-LSTM
SVM
KNN
Description:
Context: Predicting the priority of bug reports is an important activity in software maintenance. Bug priority refers to the order in which a bug or defect should be resolved. A huge number of bug reports are submitted every day. Manually filtering bug reports and assigning a priority to each report is a heavy process, which requires time, resources, and expertise. In many cases mistakes happen when priority is assigned manually, which prevents developers from finishing their tasks, fixing bugs, and improving quality. Objective: Bugs are widespread, and there is a noticeable increase in the number of bug reports submitted by users and team members under limited resources, which underlines the need for a model that focuses on detecting the priority of bug reports and allows developers to find the highest-priority ones. This paper presents a model that focuses on predicting and assigning a priority level (high or low) to each bug report. Method: The model considers a set of factors (indicators), such as component name, summary, assignee, and reporter, that possibly affect the priority level of a bug report. The factors are extracted as features from a dataset built from bug reports taken from closed-source projects stored in the JIRA bug tracking system, which are then used to train and test the framework. This work also presents a tool that helps developers assign a priority level to a bug report automatically, based on the LSTM model's prediction. Results: Our experiments consisted of applying a 5-layer deep learning RNN-LSTM neural network and comparing the results with a Support Vector Machine (SVM) and K-nearest neighbours (KNN) for predicting the priority of bug reports. The performance of the proposed RNN-LSTM model has been analyzed on the JIRA dataset with more than 2000 bug reports. The proposed model was found to be 90% accurate, compared with KNN (74%) and SVM (87%). On average, RNN-LSTM improves the F-measure by 3% compared to SVM and 15.2% compared to KNN. Conclusion: LSTM predicts and assigns the priority of a bug more accurately and effectively than the other ML algorithms (KNN and SVM), and significantly improves the average F-measure in comparison to the other classifiers. The study showed that LSTM reported the best results on all performance measures (Accuracy = 0.908, AUC = 0.95, F-measure = 0.892).
Source:
e-Informatica Software Engineering Journal; 2021, 15, 1; 29-45
1897-7979
Appears in:
e-Informatica Software Engineering Journal
Content provider:
Biblioteka Nauki
Article
Title:
A distributed big data analytics model for traffic accidents classification and recognition based on SparkMlLib cores
Authors:
Mallahi, Imad El
Riffi, Jamal
Tairi, Hamid
Ez-Zahout, Abderrahmane
Mahraz, Mohamed Adnane
Links:
https://bibliotekanauki.pl/articles/27314355.pdf
Publication date:
2022
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Automatyki i Pomiarów
Subjects:
big data
machine learning
traffic accident
severity prediction
convolutional neural network
Description:
This paper focuses on big data analytics for traffic accident prediction based on SparkMlLib cores; Spark's Machine Learning Pipelines provide a helpful and suitable API for creating and tuning classification and prediction models for decision-making concerning traffic accidents. Data scientists have recently focused on classification and prediction techniques for traffic accidents, and data analytics techniques for feature extraction have also continued to evolve. Analysis of a huge volume of received data requires considerable processing time; practically, the implementation of such processes in real-time systems requires high computation speed. Processing speed plays an important role in traffic accident recognition in real-time systems. It requires the use of modern technologies and fast algorithms that accelerate the extraction of the feature parameters of traffic accidents. Problems with acceleration during the digital processing of traffic accidents have yet to be completely resolved. Our proposed model is based on advanced processing by the Spark MlLib core. We use Spark's real-time data streaming API to continuously gather real-time data from multiple external data sources in the form of data streams. Secondly, the data streams are treated as unbounded tables. After this, we call the random forest algorithm continuously to extract the feature parameters of a traffic accident. The proposed method makes it possible to increase the speed factor on processors. Experiment results showed that the proposed method successfully extracts the accident features and achieves seamless classification performance compared to other conventional traffic accident recognition algorithms. Finally, we share all detected accidents, with details, with other users via online applications.
Source:
Journal of Automation Mobile Robotics and Intelligent Systems; 2022, 16, 4; 62-71
1897-8649
2080-2145
Appears in:
Journal of Automation Mobile Robotics and Intelligent Systems
Content provider:
Biblioteka Nauki
Article
Title:
A fast neural network learning algorithm with approximate singular value decomposition
Authors:
Jankowski, Norbert
Linowiecki, Rafał
Links:
https://bibliotekanauki.pl/articles/330870.pdf
Publication date:
2019
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
Moore–Penrose pseudoinverse
radial basis function network
extreme learning machine
kernel method
machine learning
singular value decomposition
deep extreme learning
principal component analysis
pseudoodwrotność Moore–Penrose
radialna funkcja bazowa
maszyna uczenia ekstremalnego
uczenie maszynowe
analiza składników głównych
Description:
The learning of neural networks is becoming more and more important. Researchers have constructed dozens of learning algorithms, but it is still necessary to develop faster, more flexible, or more accurate ones. With fast learning we can examine more learning scenarios for a given problem, especially in the case of meta-learning. In this article we focus on the construction of a much faster learning algorithm and its modifications, especially for nonlinear versions of neural networks. The main idea of this algorithm lies in the use of a fast approximation of the Moore–Penrose pseudoinverse matrix. The complexity of the original singular value decomposition algorithm is O(mn²). We consider algorithms with a complexity of O(mnl), where l < n and l is often significantly smaller than n. Such learning algorithms can be applied to the learning of radial basis function networks, extreme learning machines or deep ELMs, principal component analysis, or even missing data imputation.
Source:
International Journal of Applied Mathematics and Computer Science; 2019, 29, 3; 581-594
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
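Pseudoinverse-based training, as used in ELM-style networks, reduces to solving the normal equations (HᵀH)w = Hᵀy for the output weights, where H is the hidden-layer activation matrix. A small self-contained sketch with a Gaussian-elimination solver; the random hidden layer, ridge term, and toy target are illustrative assumptions and do not reproduce the paper's approximate-SVD algorithm:

```python
import math
import random

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def elm_fit(X, y, hidden=15, seed=1):
    rng = random.Random(seed)
    # Random, untrained hidden layer: weights and biases are never adjusted.
    Wh = [[rng.uniform(-1, 1) for _ in X[0]] for _ in range(hidden)]
    bh = [rng.uniform(-1, 1) for _ in range(hidden)]
    H = [[math.tanh(sum(w * x for w, x in zip(Wh[j], row)) + bh[j])
          for j in range(hidden)] for row in X]
    # Normal equations (H^T H + lam*I) w = H^T y; small ridge for stability.
    lam = 1e-6
    m = len(H)
    HtH = [[sum(H[r][i] * H[r][j] for r in range(m)) + (lam if i == j else 0.0)
            for j in range(hidden)] for i in range(hidden)]
    Hty = [sum(H[r][i] * y[r] for r in range(m)) for i in range(hidden)]
    w = solve(HtH, Hty)
    def predict(row):
        h = [math.tanh(sum(wj * x for wj, x in zip(Wh[j], row)) + bh[j])
             for j in range(hidden)]
        return sum(wi * hi for wi, hi in zip(w, h))
    return predict

# Fit a smooth 1-D function from samples; only the output weights are learned.
X = [[x / 10] for x in range(-20, 21)]
y = [math.sin(row[0]) for row in X]
model = elm_fit(X, y)
err = max(abs(model(row) - yi) for row, yi in zip(X, y))
print(err)
```

The single linear solve replaces iterative backpropagation, which is why pseudoinverse methods (and the faster approximations studied in the paper) train so quickly.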
Title:
A High-Accuracy of Transmission Line Faults (TLFs) Classification Based on Convolutional Neural Network
Authors:
Fuada, S.
Shiddieqy, H. A.
Adiono, T.
Links:
https://bibliotekanauki.pl/articles/1844462.pdf
Publication date:
2020
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
fault detection
fault classification
transmission lines
convolutional neural network
machine learning
Description:
To improve power system reliability, a protection mechanism is highly needed. Early detection can be used to prevent failures in the power transmission line (TL). A classification method is widely used to protect against false detection as well as to assist decision analysis. Each TL signal has a continuous pattern that can be detected and classified by conventional methods, i.e., wavelet feature extraction and an artificial neural network (ANN). However, the accuracy resulting from these models is relatively low. To overcome this issue, we propose a machine learning approach based on a Convolutional Neural Network (CNN) for the transmission line faults (TLFs) application. A CNN is more suitable for pattern recognition than a conventional ANN or an ANN with Discrete Wavelet Transform (DWT) feature extraction. In this work, we first simulate our proposed model using Simulink® and Matlab®. This simulation generates a fault signal dataset, which is divided into 45,738 training samples and 4,752 test samples. Later, we design a number of machine learning classifiers. Each classifier is trained on the same dataset. The CNN design with raw input is determined to be the optimal model from the training process, with 100% accuracy.
Source:
International Journal of Electronics and Telecommunications; 2020, 66, 4; 655-664
2300-1933
Appears in:
International Journal of Electronics and Telecommunications
Content provider:
Biblioteka Nauki
Article
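At the core of such a CNN classifier is a convolution that slides a learned kernel along the signal. A minimal 1-D "valid" convolution; the kernel values are invented, whereas a real fault detector would learn them:

```python
def conv1d(signal, kernel):
    """'Valid' 1-D convolution (cross-correlation, as in most DL frameworks)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A step edge in the signal lights up a difference kernel at the fault onset.
signal = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
edge_kernel = [-1.0, 1.0]
print(conv1d(signal, edge_kernel))  # [0.0, 0.0, 1.0, 0.0, 0.0]
```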
Title:
A hybrid approach to dimension reduction in classification
Authors:
Krawczak, M.
Szkatuła, G.
Links:
https://bibliotekanauki.pl/articles/206425.pdf
Publication date:
2011
Publisher:
Polska Akademia Nauk. Instytut Badań Systemowych PAN
Subjects:
data series
dimension reduction
envelopes
essential attributes
heteroassociation
machine learning from examples
decision rules
classification
Description:
In this paper we introduce a hybrid approach to data series classification. The approach is based on the concept of aggregated upper and lower envelopes and principal components, here called 'essential attributes', generated by multilayer neural networks. The essential attributes are represented by the outputs of hidden-layer neurons. Next, the real-valued essential attributes are nominalized and a symbolic data series representation is obtained. The symbolic representation is then used to generate decision rules in the IF...THEN form for data series classification. The approach reduces the dimension of the data series. Its efficiency was verified on numerical examples.
Source:
Control and Cybernetics; 2011, 40, 2; 527-551
0324-8569
Appears in:
Control and Cybernetics
Content provider:
Biblioteka Nauki
Article
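Upper and lower envelopes of a data series can be computed as windowed running maxima and minima, which is one simple way to realize the aggregation step described above. A small sketch; the window width and series values are illustrative:

```python
def envelopes(series, window):
    """Upper/lower envelope: max/min over a centred window at each point."""
    half = window // 2
    upper, lower = [], []
    for i in range(len(series)):
        chunk = series[max(0, i - half): i + half + 1]
        upper.append(max(chunk))
        lower.append(min(chunk))
    return upper, lower

series = [1, 3, 2, 5, 4, 2, 6]
up, lo = envelopes(series, window=3)
print(up)  # [3, 3, 5, 5, 5, 6, 6]
print(lo)  # [1, 1, 2, 2, 2, 2, 2]
```

The band between the two envelopes is a coarser, lower-dimensional summary of the series, which downstream steps can discretize into symbols.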
Title:
A hybridization of machine learning and NSGA-II for multi-objective optimization of surface roughness and cutting force in AISI 4340 alloy steel turning
Authors:
Nguyen, Anh-Tu
Nguyen, Van-Hai
Le, Tien-Thinh
Nguyen, Nhu-Tung
Links:
https://bibliotekanauki.pl/articles/2200263.pdf
Publication date:
2023
Publisher:
Wrocławska Rada Federacji Stowarzyszeń Naukowo-Technicznych
Subjects:
multi-objective optimisation
machine learning
AISI 4340
NSGA-II
ANN
Description:
This work focuses on optimizing process parameters in turning AISI 4340 alloy steel. A hybridization of Machine Learning (ML) algorithms and a Non-Dominated Sorting Genetic Algorithm (NSGA-II) is applied to find the Pareto solution. The objective functions are a simultaneous minimum of average surface roughness (Ra) and cutting force, under cutting parameter constraints of cutting speed, feed rate, depth of cut, and tool nose radius in the ranges of 50–375 m/min, 0.02–0.25 mm/rev, 0.1–1.5 mm, and 0.4–0.8 mm, respectively. The present study uses five ML models, namely SVR, CAT, RFR, GBR, and ANN, to predict Ra and cutting force. Results indicate that ANN offers the best predictive performance with respect to all accuracy metrics: root mean squared error (RMSE), mean absolute error (MAE), and coefficient of determination (R²). In addition, a hybridization of NSGA-II and ANN is implemented to find the optimal solutions for the machining parameters, which lie on the Pareto front. The results of this multi-objective optimization indicate that Ra lies in a range between 1.032 and 1.048 μm, and cutting force between 7.981 and 8.277 kgf, for the five selected Pareto solutions. In the set of non-dominated solutions, none is superior to any other, so it is the manufacturer's decision which dataset to select. The value ranges in the Pareto solutions generated by NSGA-II are: cutting speeds between 72.92 and 75.11 m/min, a feed rate of 0.02 mm/rev, a depth of cut between 0.62 and 0.79 mm, and a tool nose radius of 0.4 mm. Experimental validations were finally conducted to verify the optimization procedure.
Source:
Journal of Machine Engineering; 2023, 23, 1; 133-153
1895-7595
2391-8071
Appears in:
Journal of Machine Engineering
Content provider:
Biblioteka Nauki
Article
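NSGA-II returns the non-dominated (Pareto) set for the two minimized objectives, surface roughness and cutting force. The dominance filter at its heart is simple; the objective values below are invented, not the paper's measurements:

```python
def dominates(p, q):
    """p dominates q if p is no worse in all objectives and better in one."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def pareto_front(points):
    """Keep the points not dominated by any other point (both minimized)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (Ra in um, cutting force in kgf) for candidate parameter sets.
candidates = [(1.03, 8.3), (1.05, 8.0), (1.10, 8.5), (1.02, 8.6), (1.04, 8.1)]
print(pareto_front(candidates))
```

Here (1.10, 8.5) is dropped because (1.03, 8.3) is better in both objectives; the remaining four points are mutually incomparable, exactly the "manufacturer's choice" situation the abstract describes.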
