
You are searching for the phrase "Resnet" by criterion: Subject


Displaying 1-3 of 3
Title:
The automatic focus segmentation of multi-focus image fusion
Authors:
Hawari, K.
Ismail
Links:
https://bibliotekanauki.pl/articles/2173548.pdf
Publication date:
2022
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
deep learning
ResNet50
multifocus image fusion
Description:
Multi-focus image fusion is a method of increasing image quality and preventing image redundancy. It is used in many fields, such as medical diagnostics, surveillance, and remote sensing. Although various algorithms are available today, a common problem remains: existing methods do not handle ghost effects and unpredicted noise sufficiently well. Computational intelligence has developed quickly over recent decades, and multi-focus image fusion has advanced rapidly with it. The proposed method is multi-focus image fusion based on an automatic encoder-decoder algorithm built on the DeepLabV3+ architecture. The network is trained on a multi-focus dataset with ground truth, and the resulting model is used in the testing stage to predict the focus map; the testing process is semantic focus processing. Lastly, the fusion stage combines the focus map with the multi-focus images to produce the fused image. The results show that the fused images contain no ghost effects and no unpredicted tiny objects. The proposed method is assessed in two respects: the accuracy of the predicted focus map, and objective quality measures of the fused image, namely mutual information, SSIM, and PSNR. The method achieves high precision and recall, together with high SSIM, PSNR, and mutual information scores, and its performance is more stable than that of other methods. Finally, the ResNet50-based model for multi-focus image fusion handles the ghost effect problem well.
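The abstract describes a focus-map pipeline: an encoder-decoder network predicts which pixels of each source image are in focus, and the fused image is composed pixel-wise from the in-focus sources. Below is a minimal sketch, not the authors' code: it uses torchvision's DeepLabV3 with a ResNet50 backbone as a stand-in for the paper's DeepLabV3+ encoder-decoder, and the two-class head (focused/defocused) and untrained weights are assumptions for illustration.

```python
import numpy as np
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Stand-in for the paper's DeepLabV3+ encoder-decoder: torchvision's
# DeepLabV3 head on a ResNet50 backbone, with 2 classes (focused/defocused).
# In the paper this model is trained on a multi-focus dataset with ground truth.
model = deeplabv3_resnet50(weights=None, num_classes=2).eval()

def predict_focus_map(img: np.ndarray) -> np.ndarray:
    """Predict a per-pixel binary focus map for one (H, W, 3) uint8 image."""
    x = torch.from_numpy(img).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        logits = model(x)["out"]            # (1, 2, H, W) class scores
    return logits.argmax(dim=1)[0].numpy()  # (H, W) map with values in {0, 1}

def fuse(img_a: np.ndarray, img_b: np.ndarray, focus_map: np.ndarray) -> np.ndarray:
    """Take each pixel from the source image that the map marks as in focus."""
    mask = focus_map.astype(bool)[..., None]  # broadcast over RGB channels
    return np.where(mask, img_a, img_b)       # 1 -> img_a, 0 -> img_b
```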
Source:
Bulletin of the Polish Academy of Sciences. Technical Sciences; 2022, 70, 1; e140352, 1-8
ISSN 0239-7528
Appears in:
Bulletin of the Polish Academy of Sciences. Technical Sciences
Content provider:
Biblioteka Nauki
Article
Title:
Segmentation of bone structures with the use of deep learning techniques
Authors:
Krawczyk, Zuzanna
Starzyński, Jacek
Links:
https://bibliotekanauki.pl/articles/2128158.pdf
Publication date:
2021
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
deep learning
semantic segmentation
U-net
FCN
ResNet
computed tomography
Description:
The paper focuses on the automatic segmentation of bone structures from CT data series of the pelvic region. The authors trained and compared four deep neural network models (FCN, PSPNet, U-net, and Segnet) on a segmentation task with the following three classes: background, patient outline, and bones. The mean and class-wise Intersection over Union (IoU), Dice coefficient, and pixel accuracy were evaluated for each network's output. In the initial phase, all of the networks were trained for 10 epochs. The most accurate segmentation was obtained with the U-net model, with a mean IoU of 93.2%. These results were further outperformed by a modified U-net with ResNet50 used as the encoder, trained for 30 epochs, which obtained the following results: mIoU 96.92%, "bone" class IoU 92.87%, mean Dice coefficient 98.41%, Dice coefficient for "bone" 96.31%, mean accuracy 99.85%, and accuracy for the "bone" class 99.92%.
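The IoU, Dice, and pixel-accuracy figures quoted above follow their standard definitions. As a minimal sketch (not the authors' evaluation code), the class-wise and mean scores can be computed from integer label maps as follows; the label assignment (0 = background, 1 = patient outline, 2 = bone) is assumed from the abstract.

```python
import numpy as np

def class_iou(pred: np.ndarray, target: np.ndarray, cls: int) -> float:
    """Intersection over Union for one class: |P ∩ T| / |P ∪ T|."""
    p, t = pred == cls, target == cls
    union = np.logical_or(p, t).sum()
    return float(np.logical_and(p, t).sum() / union) if union else float("nan")

def class_dice(pred: np.ndarray, target: np.ndarray, cls: int) -> float:
    """Dice coefficient for one class: 2|P ∩ T| / (|P| + |T|)."""
    p, t = pred == cls, target == cls
    denom = p.sum() + t.sum()
    return float(2.0 * np.logical_and(p, t).sum() / denom) if denom else float("nan")

def pixel_accuracy(pred: np.ndarray, target: np.ndarray) -> float:
    """Fraction of pixels assigned the correct class."""
    return float((pred == target).mean())

def mean_scores(pred: np.ndarray, target: np.ndarray, classes=(0, 1, 2)):
    """Mean IoU and mean Dice over the three classes used in the paper."""
    miou = np.nanmean([class_iou(pred, target, c) for c in classes])
    mdice = np.nanmean([class_dice(pred, target, c) for c in classes])
    return miou, mdice
```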
Source:
Bulletin of the Polish Academy of Sciences. Technical Sciences; 2021, 69, 3; e136751, 1-8
ISSN 0239-7528
Appears in:
Bulletin of the Polish Academy of Sciences. Technical Sciences
Content provider:
Biblioteka Nauki
Article
Title:
Segmentation of bone structures with the use of deep learning techniques
Authors:
Krawczyk, Zuzanna
Starzyński, Jacek
Links:
https://bibliotekanauki.pl/articles/2173574.pdf
Publication date:
2021
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
deep learning
semantic segmentation
U-net
FCN
ResNet
computed tomography
Description:
The paper focuses on the automatic segmentation of bone structures from CT data series of the pelvic region. The authors trained and compared four deep neural network models (FCN, PSPNet, U-net, and Segnet) on a segmentation task with the following three classes: background, patient outline, and bones. The mean and class-wise Intersection over Union (IoU), Dice coefficient, and pixel accuracy were evaluated for each network's output. In the initial phase, all of the networks were trained for 10 epochs. The most accurate segmentation was obtained with the U-net model, with a mean IoU of 93.2%. These results were further outperformed by a modified U-net with ResNet50 used as the encoder, trained for 30 epochs, which obtained the following results: mIoU 96.92%, "bone" class IoU 92.87%, mean Dice coefficient 98.41%, Dice coefficient for "bone" 96.31%, mean accuracy 99.85%, and accuracy for the "bone" class 99.92%.
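The best-scoring configuration above is a U-net with a ResNet50 encoder. A minimal sketch of assembling that combination, not the authors' implementation, using the third-party segmentation_models_pytorch package; the single-channel CT input, ImageNet encoder initialization, and input size are assumptions for illustration.

```python
import torch
import segmentation_models_pytorch as smp

# U-net decoder over a ResNet50 encoder, mirroring the paper's
# best-performing configuration; 3 classes: background, outline, bone.
model = smp.Unet(
    encoder_name="resnet50",
    encoder_weights="imagenet",  # assumed pretrained initialization
    in_channels=1,               # single-channel CT slices (assumption)
    classes=3,
)

x = torch.randn(1, 1, 256, 256)  # one dummy CT slice (H, W divisible by 32)
logits = model(x)                # (1, 3, 256, 256) per-pixel class scores
pred = logits.argmax(dim=1)      # (1, 256, 256) label map with values in {0, 1, 2}
```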
Source:
Bulletin of the Polish Academy of Sciences. Technical Sciences; 2021, 69, 3; art. no. e136751
ISSN 0239-7528
Appears in:
Bulletin of the Polish Academy of Sciences. Technical Sciences
Content provider:
Biblioteka Nauki
Article