You are searching for the phrase "Resnet" by criterion: Subject


Displaying 1-7 of 7
Title:
Usage of artificial neural networks in the diagnosis of knee joint disorders
Zastosowanie sztucznych sieci neuronowych w diagnozie schorzeń stawu kolanowego
Authors:
Witkowski, Konrad
Wieczorek, Mikołaj
Links:
https://bibliotekanauki.pl/articles/27315456.pdf
Publication date:
2023
Publisher:
Politechnika Lubelska. Wydawnictwo Politechniki Lubelskiej
Subjects:
classification
MRI images
Resnet
Alexnet
Description:
The following article addresses the issue of automatic knee disorder diagnosis using neural networks. We propose several hybrid neural network architectures that aim to classify abnormalities using MRI (magnetic resonance imaging) images acquired from a publicly available dataset. To construct these combinations of models, we used pretrained Alexnet, Resnet18 and Resnet34 models downloaded from Torchvision. Experiments showed that for certain abnormalities our models can achieve up to 90% accuracy.
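The abstract does not describe how the pretrained Torchvision models were combined into the hybrid architectures, so the following is only a minimal sketch of the common starting point it implies, assuming a single binary abnormality label, a Resnet18 backbone, and a one-logit classification head (all assumptions, not details from the paper):

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a pretrained backbone from Torchvision (Resnet18 here; Alexnet or
# Resnet34 can be swapped in the same way).
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Replace the ImageNet head with a single-logit output for binary
# abnormality classification (an assumed setup; the paper's exact hybrid
# head is not described in the abstract).
backbone.fc = nn.Linear(backbone.fc.in_features, 1)
backbone.eval()

# One hypothetical MRI slice resized to the ImageNet input resolution.
dummy_slice = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    probability = torch.sigmoid(backbone(dummy_slice))
print(probability.shape)  # torch.Size([1, 1])
```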
Source:
Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska; 2023, 13, 4; 11--14
2083-0157
2391-6761
Appears in:
Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska
Content provider:
Biblioteka Nauki
Article
Title:
Plant disease detection using ensembled CNN framework
Authors:
Mondal, Subhash
Banerjee, Suharta
Mukherjee, Subinoy
Sengupta, Diganta
Links:
https://bibliotekanauki.pl/articles/27312905.pdf
Publication date:
2022
Publisher:
Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie. Wydawnictwo AGH
Subjects:
convolutional neural network
disease detection
ResNet-50
VGG-19
InceptionV3
Description:
Agriculture is the prime driving force for the growth of agro-based economies globally. In agriculture, detecting pest attacks and protecting crops from them is a primary concern in today’s world. Early detection of plant disease is necessary in order to avoid degradation of crop yields. In this paper, we propose an ensemble-based convolutional neural network (CNN) architecture that detects plant disease from images of a plant’s leaves. The proposed architecture uses CNN architectures such as VGG-19, ResNet-50, and InceptionV3 as its base models, and the predictions from these models are used as input for our meta-model (Inception-ResNetV2). This approach helped us build a generalized model for disease detection with an accuracy of 97.9% under test conditions.
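The stacking scheme above (base CNNs whose predictions feed a meta-model) can be illustrated with a short sketch. It assumes PyTorch/torchvision, uses a hypothetical class count, omits InceptionV3 (which expects 299×299 inputs), and substitutes a small MLP for the Inception-ResNetV2 meta-model, so it shows the ensemble wiring rather than the authors' exact pipeline:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 38  # hypothetical number of leaf-disease classes

# Base learners: ImageNet-pretrained CNNs with their heads replaced.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, NUM_CLASSES)

resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
resnet.fc = nn.Linear(resnet.fc.in_features, NUM_CLASSES)
vgg.eval(); resnet.eval()

# Stand-in meta-model over the concatenated base predictions (the paper
# uses Inception-ResNetV2 here, which torchvision does not bundle).
meta = nn.Sequential(nn.Linear(2 * NUM_CLASSES, 64), nn.ReLU(),
                     nn.Linear(64, NUM_CLASSES))

def ensemble_predict(images):
    """Stacking: base-model class probabilities feed the meta-model."""
    with torch.no_grad():
        p1 = torch.softmax(vgg(images), dim=1)
        p2 = torch.softmax(resnet(images), dim=1)
    return meta(torch.cat([p1, p2], dim=1))

print(ensemble_predict(torch.randn(4, 3, 224, 224)).shape)  # (4, 38)
```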
Source:
Computer Science; 2022, 23 (3); 321--333
1508-2806
2300-7036
Appears in:
Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
Advancing Chipboard Milling Process Monitoring through Spectrogram-Based Time Series Analysis with Convolutional Neural Network using Pretrained Networks
Authors:
Kurek, Jarosław
Szymanowski, Karol
Chmielewski, Leszek
Orłowski, Arkadiusz
Links:
https://bibliotekanauki.pl/articles/27323142.pdf
Publication date:
2023
Publisher:
Szkoła Główna Gospodarstwa Wiejskiego w Warszawie. Instytut Informatyki Technicznej
Subjects:
convolutional neural networks
CNN
vgg16
vgg19
resnet34
tool state monitoring
chipboard milling
Description:
This paper presents a novel approach to enhance chipboard milling process monitoring in the furniture manufacturing sector using Convolutional Neural Networks (CNNs) with pretrained architectures such as VGG16, VGG19, and RESNET34. The study leverages spectrogram representations of time-series data obtained during the milling process, providing a unique perspective on tool condition monitoring. The efficiency of the CNN models in accurately classifying tool conditions into distinct states (‘Green’, ‘Yellow’, and ‘Red’) based on wear levels is thoroughly evaluated. Experimental results demonstrate that VGG16 and VGG19 achieve high accuracy, albeit with longer training times, while RESNET34 offers faster training at the cost of reduced precision. This research not only highlights the potential of pretrained CNNs in industrial applications but also opens new avenues for predictive maintenance and quality control in manufacturing, underscoring the broader applicability of AI in industrial automation and monitoring systems.
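The abstract does not give the signal source or spectrogram settings, so the sketch below uses a synthetic signal and placeholder STFT parameters purely to illustrate the time-series → spectrogram → pretrained-CNN pipeline with a three-class ('Green', 'Yellow', 'Red') head:

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.signal import spectrogram
from torchvision import models

# Hypothetical 1-D sensor signal recorded during milling (1 s at 50 kHz).
fs = 50_000
signal = np.random.randn(fs).astype(np.float32)

# Time series -> log-scaled spectrogram (2-D time-frequency image).
_, _, sxx = spectrogram(signal, fs=fs, nperseg=512, noverlap=256)
sxx = np.log1p(sxx).astype(np.float32)

# Turn it into a 3-channel "image" at the CNN's expected resolution.
x = torch.from_numpy(sxx)[None, None]                     # (1, 1, F, T)
x = F.interpolate(x, size=(224, 224), mode="bilinear", align_corners=False)
x = x.repeat(1, 3, 1, 1)                                  # (1, 3, 224, 224)

# Pretrained ResNet34 with a 3-class head for the tool states.
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 3)
model.eval()

with torch.no_grad():
    tool_state_logits = model(x)
print(tool_state_logits.shape)  # torch.Size([1, 3])
```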
Source:
Machine Graphics & Vision; 2023, 32, 2; 89--108
1230-0535
2720-250X
Appears in:
Machine Graphics & Vision
Content provider:
Biblioteka Nauki
Article
Title:
Segmentation of bone structures with the use of deep learning techniques
Authors:
Krawczyk, Zuzanna
Starzyński, Jacek
Links:
https://bibliotekanauki.pl/articles/2173574.pdf
Publication date:
2021
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
deep learning
semantic segmentation
U-net
FCN
ResNet
computed tomography
Description:
The paper is focused on the automatic segmentation of bone structures from CT data series of the pelvic region. The authors trained and compared four different deep neural network models (FCN, PSPNet, U-net and Segnet) to perform segmentation into the following three classes: background, patient outline and bones. The mean and class-wise Intersection over Union (IoU), Dice coefficient and pixel accuracy measures were evaluated for each network's output. In the initial phase, all of the networks were trained for 10 epochs. The most accurate segmentation results were obtained with the U-net model, with a mean IoU value of 93.2%. These results were further improved by a modified U-net model with ResNet50 used as the encoder and trained for 30 epochs, which obtained the following results: mIoU – 96.92%, “bone” class IoU – 92.87%, mDice coefficient – 98.41%, mDice coefficient for “bone” – 96.31%, mAccuracy – 99.85% and accuracy for the “bone” class – 99.92%.
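The abstract does not state how the ResNet50-encoder U-net was built; one common way is the segmentation_models_pytorch package, shown here as a sketch with an assumed single-channel CT input and 256×256 slices:

```python
import torch
import segmentation_models_pytorch as smp

# U-net with an ImageNet-pretrained ResNet50 encoder, predicting the three
# classes from the abstract: background, patient outline, bone.
model = smp.Unet(
    encoder_name="resnet50",
    encoder_weights="imagenet",
    in_channels=1,        # single-channel CT slices (assumption)
    classes=3,
)
model.eval()

ct_slice = torch.randn(1, 1, 256, 256)   # hypothetical CT slice batch
with torch.no_grad():
    logits = model(ct_slice)             # (1, 3, 256, 256)
prediction = logits.argmax(dim=1)        # per-pixel class labels
print(prediction.shape)                  # torch.Size([1, 256, 256])
```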
Source:
Bulletin of the Polish Academy of Sciences. Technical Sciences; 2021, 69, 3; art. no. e136751
0239-7528
Appears in:
Bulletin of the Polish Academy of Sciences. Technical Sciences
Content provider:
Biblioteka Nauki
Article
Title:
Segmentation of bone structures with the use of deep learning techniques
Authors:
Krawczyk, Zuzanna
Starzyński, Jacek
Links:
https://bibliotekanauki.pl/articles/2128158.pdf
Publication date:
2021
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
deep learning
semantic segmentation
U-net
FCN
ResNet
computed tomography
Description:
The paper is focused on the automatic segmentation of bone structures from CT data series of the pelvic region. The authors trained and compared four different deep neural network models (FCN, PSPNet, U-net and Segnet) to perform segmentation into the following three classes: background, patient outline and bones. The mean and class-wise Intersection over Union (IoU), Dice coefficient and pixel accuracy measures were evaluated for each network's output. In the initial phase, all of the networks were trained for 10 epochs. The most accurate segmentation results were obtained with the U-net model, with a mean IoU value of 93.2%. These results were further improved by a modified U-net model with ResNet50 used as the encoder and trained for 30 epochs, which obtained the following results: mIoU – 96.92%, “bone” class IoU – 92.87%, mDice coefficient – 98.41%, mDice coefficient for “bone” – 96.31%, mAccuracy – 99.85% and accuracy for the “bone” class – 99.92%.
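As a complement to the architecture itself, the sketch below shows how the reported measures (class-wise IoU, Dice coefficient and pixel accuracy) are typically computed from a predicted label map and its ground truth; the tiny 4×4 label maps are hypothetical:

```python
import torch

def segmentation_metrics(pred, target, num_classes=3):
    """Per-class IoU and Dice plus overall pixel accuracy.

    pred, target: integer label maps of identical shape (e.g. H x W).
    Classes absent from both maps score 0 in this simple version.
    """
    ious, dices = [], []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        inter = (p & t).sum().float()
        union = (p | t).sum().float()
        ious.append((inter / union.clamp(min=1)).item())
        dices.append((2 * inter / (p.sum() + t.sum()).clamp(min=1)).item())
    accuracy = (pred == target).float().mean().item()
    return ious, dices, accuracy

# Hypothetical 4x4 prediction vs. ground truth with classes {0, 1, 2}.
pred = torch.tensor([[0, 0, 1, 1], [0, 2, 2, 1], [0, 2, 2, 1], [0, 0, 1, 1]])
gt   = torch.tensor([[0, 0, 1, 1], [0, 2, 1, 1], [0, 2, 2, 1], [0, 0, 0, 1]])
print(segmentation_metrics(pred, gt))
```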
Source:
Bulletin of the Polish Academy of Sciences. Technical Sciences; 2021, 69, 3; e136751, 1--8
0239-7528
Appears in:
Bulletin of the Polish Academy of Sciences. Technical Sciences
Content provider:
Biblioteka Nauki
Article
Title:
The automatic focus segmentation of multi-focus image fusion
Authors:
Hawari, K.
Ismail
Links:
https://bibliotekanauki.pl/articles/2173548.pdf
Publication date:
2022
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
deep learning
ResNet50
multifocus image fusion
Description:
Multi-focus image fusion is a method of increasing image quality and preventing image redundancy. It is utilized in many fields such as medical diagnostics, surveillance, and remote sensing. Various algorithms are available nowadays; however, a common problem remains: the methods are not sufficient to handle the ghost effect and unpredicted noise. Computational intelligence has developed quickly over recent decades, followed by the rapid development of multi-focus image fusion. The proposed method is multi-focus image fusion based on an automatic encoder-decoder algorithm using the deeplabV3+ architecture. During training, it uses a multi-focus dataset and ground truth, and the network model is constructed through the training process. This model is then used in the testing process to predict the focus map by means of semantic focus processing. Lastly, the fusion process combines the focus map and the multi-focus images to produce the fused image. The results show that the fused images do not contain any ghost effects or unpredicted tiny objects. The assessment of the proposed method covers two aspects: the accuracy of predicting the focus map, and objective quality measures of the fused image such as mutual information and the SSIM and PSNR indexes. The method shows high precision and recall, and the SSIM, PSNR, and mutual information indexes are high. The proposed method also has more stable performance compared with other methods. Finally, the Resnet50 model algorithm in multi-focus image fusion handles the ghost effect problem well.
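The final fusion step described above reduces to blending the two source images with the predicted focus map. A minimal sketch, assuming registered inputs and a focus map already produced by the segmentation network (the deeplabV3+/ResNet50 model itself is not reproduced here):

```python
import numpy as np

def fuse_with_focus_map(img_a, img_b, focus_map):
    """Combine two multi-focus images using a predicted focus map.

    focus_map: values in [0, 1]; 1 takes the pixel from img_a,
    0 takes it from img_b (binary or soft maps both work).
    """
    focus_map = focus_map[..., None]  # broadcast over colour channels
    return focus_map * img_a + (1.0 - focus_map) * img_b

# Hypothetical inputs: two registered photos with different focal planes
# and a (here random) focus map standing in for the network's prediction.
h, w = 240, 320
img_a = np.random.rand(h, w, 3)
img_b = np.random.rand(h, w, 3)
focus_map = (np.random.rand(h, w) > 0.5).astype(np.float64)

fused = fuse_with_focus_map(img_a, img_b, focus_map)
print(fused.shape)  # (240, 320, 3)
```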
Source:
Bulletin of the Polish Academy of Sciences. Technical Sciences; 2022, 70, 1; e140352, 1--8
0239-7528
Appears in:
Bulletin of the Polish Academy of Sciences. Technical Sciences
Content provider:
Biblioteka Nauki
Article
Title:
Hybrid deep learning model-based prediction of images related to cyberbullying
Authors:
Elmezain, Mahmoud
Malki, Amer
Gad, Ibrahim
Atlam, El-Sayed
Links:
https://bibliotekanauki.pl/articles/2142490.pdf
Publication date:
2022
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
cyberbullying
ResNet50
MobileNetV2
support vector machine
Description:
Cyberbullying has become more widespread as a result of the common use of social media, particularly among teenagers and young people. A lack of studies on the types of advice and support available to victims of bullying has a negative impact on individuals and society. This work proposes a hybrid model based on transformer models in conjunction with a support vector machine (SVM) to classify images from our own data set. First, seven different convolutional neural network architectures are employed to decide which is best in terms of results. Second, feature extraction is performed using the four top models, namely, the ResNet50, EfficientNetB0, MobileNet and Xception architectures. In addition, each architecture extracts the same number of feature vectors as there are images in the data set, and these features are concatenated. Finally, the features are optimized and then provided as input to the SVM classifier. The proposed merged models with the SVM classifier achieved an accuracy rate of 96.05%. Furthermore, the classification precision of the proposed merged model is 99% for the bullying class and 93% for the non-bullying class. According to these results, bullying has a negative impact on students’ academic performance. The results help stakeholders to take necessary measures against bullies and increase the community’s awareness of this phenomenon.
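A minimal sketch of the feature-extraction-plus-SVM stage described above, assuming PyTorch/torchvision and scikit-learn; only two of the four named backbones are included, the feature-optimization step is skipped, and the tiny labelled batch is made up for illustration:

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# Two of the four backbones named in the abstract (MobileNet and Xception
# are omitted for brevity), used as frozen feature extractors.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
resnet.fc = nn.Identity()          # 2048-D features per image
effnet = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
effnet.classifier = nn.Identity()  # 1280-D features per image
resnet.eval(); effnet.eval()

def extract_features(images):
    """Concatenate per-image features from both backbones."""
    with torch.no_grad():
        return torch.cat([resnet(images), effnet(images)], dim=1).numpy()

# Hypothetical labelled mini-batch (1 = bullying, 0 = non-bullying).
images = torch.randn(8, 3, 224, 224)
labels = [1, 0, 1, 1, 0, 0, 1, 0]

svm = SVC(kernel="rbf")
svm.fit(extract_features(images), labels)
print(svm.predict(extract_features(images[:2])))
```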
Source:
International Journal of Applied Mathematics and Computer Science; 2022, 32, 2; 323--334
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article