
You are searching for the phrase "Wang, Yaming" by criterion: Author


Displaying 1-2 of 2
Title:
Multi-frame Image Super-resolution Reconstruction Using Multi-grained Cascade Forest
Authors:
Wang, Yaming
Luo, Zhikang
Huang, Wenqing
Links:
https://bibliotekanauki.pl/articles/226035.pdf
Publication date:
2019
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
multi-frame image SR
image registration
SRMCF
image fusion
Description:
Super-resolution (SR) image reconstruction comprises two classes of algorithms: one for single-frame image reconstruction and one for multi-frame image reconstruction. Single-frame reconstruction generally models the degradation first and then reconstructs, which creates a problem of insufficient characterization. Relative to single frames, multi-frame images provide additional information for reconstruction because of the slight differences between sequential frames. However, existing super-resolution algorithms for multi-frame images do not take full advantage of this key factor, either because of their loose structure and complexity or because the individual frames are restored poorly. This paper proposes a new SR reconstruction algorithm based on the Multi-grained Cascade Forest. Multi-frame image reconstruction is processed sequentially: first, a registration algorithm based on a convolutional neural network registers the low-resolution image sequence; the registered images are then reconstructed by the Multi-grained Cascade Forest reconstruction algorithm; finally, the reconstructed images are fused. The most suitable algorithm is selected for each step so that details are preserved and the internal logic of the sequential steps connects tightly. In the novel approach proposed here, the depth of the cascade forest is generated adaptively for each recovered image rather than being a constant: after each layer is trained, the recovered image is evaluated automatically, and new layers are constructed and trained until an optimal restored image is obtained. Experiments show that this method improves the quality of image reconstruction while preserving image details. A hedged code sketch of the adaptive-depth cascade step appears after this entry.
Source:
International Journal of Electronics and Telecommunications; 2019, 65, 4; 687-692
ISSN: 2300-1933
Appears in:
International Journal of Electronics and Telecommunications
Content provider:
Biblioteka Nauki
Article
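The abstract above describes a sequential pipeline: CNN-based registration of the low-resolution frames, reconstruction by a multi-grained cascade forest whose depth grows until the recovered image stops improving, and finally fusion of the reconstructed frames. The snippet below is a minimal, hypothetical sketch of the adaptive-depth idea only, assuming vectorized image patches as inputs and using scikit-learn's RandomForestRegressor as a stand-in for each cascade layer; it is not the authors' implementation, and registration, multi-grained scanning, and fusion are omitted.

```python
# Hypothetical sketch: grow cascade layers until the validation PSNR
# stops improving, mirroring the "evaluate after each layer, then add
# new layers" idea from the abstract. Names and the per-layer learner
# (RandomForestRegressor) are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def psnr(ref, img):
    """Peak signal-to-noise ratio between reference and reconstruction (8-bit range)."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def cascade_forest_sr(lr_patches, hr_patches, val_lr, val_hr, max_layers=8):
    """Train cascade layers on (LR patch -> HR patch) pairs with adaptive depth."""
    X_train, X_val = lr_patches, val_lr
    best_psnr, best_pred = -np.inf, None
    for _ in range(max_layers):
        layer = RandomForestRegressor(n_estimators=50)
        layer.fit(X_train, hr_patches)
        pred_val = layer.predict(X_val)
        score = psnr(val_hr, pred_val)
        if score <= best_psnr:          # quality no longer improves: stop growing
            break
        best_psnr, best_pred = score, pred_val
        # cascade style: augment the inputs with the current layer's output
        X_train = np.hstack([X_train, layer.predict(X_train)])
        X_val = np.hstack([X_val, pred_val])
    return best_pred, best_psnr
```

In this sketch the stopping criterion is a plateau in validation PSNR; the paper may use a different automatic quality measure for the recovered image.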
Title:
Detecting objects using Rolling Convolution and Recurrent Neural Network
Authors:
Huang, WenQing
Huang, MingZhu
Wang, YaMing
Links:
https://bibliotekanauki.pl/articles/226942.pdf
Publication date:
2019
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
multi-scale features
global context information
rolling convolution
recurrent neural network
Description:
At present, most existing object detection algorithms use region proposals to search for targets in an image. The most effective region proposal methods usually require thousands of candidate regions to achieve a high recall rate, which lowers detection efficiency. Even though recent region proposal network approaches have yielded good results with only hundreds of proposals, they still struggle with small objects and precise localization, mainly because they rely on coarse features. We therefore propose a new method that extracts more effective global and multi-scale features to improve detection performance. Because feature maps lose the resolution required to detect small objects as successive convolutions extract deeper semantic information, we use rolling convolution (RC) to maintain the high resolution of low-level feature maps and explore objects in greater detail, even without a structure dedicated to combining the features of multiple convolutional layers. Furthermore, we place a recurrent neural network built from gated recurrent units (GRUs) on top of the convolutional layers to highlight useful global context locations that assist object detection. In experiments on benchmark datasets, the proposed method achieved 78.2% mAP on PASCAL VOC 2007 and 72.3% mAP on PASCAL VOC 2012, and extensive experiments confirm that it reaches a more advanced level of detection. A hedged code sketch of these two components appears after this entry.
Source:
International Journal of Electronics and Telecommunications; 2019, 65, 2; 293-301
ISSN: 2300-1933
Appears in:
International Journal of Electronics and Telecommunications
Content provider:
Biblioteka Nauki
Article
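The abstract above combines two ideas: keeping low-level feature maps at high resolution while mixing in deeper semantics (rolling convolution), and a GRU-based recurrent network over the top convolutional features to encode global context. The sketch below, written in PyTorch, is only an illustrative approximation of those two ideas; the module name, channel sizes, and the exact rolling scheme are assumptions, not the paper's architecture.

```python
# Illustrative sketch: fuse multi-scale feature maps at the low-level
# resolution, then run a GRU over spatial positions for global context.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RollingContextBackbone(nn.Module):
    def __init__(self, in_ch=3, ch=32, hidden=64):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, ch, 3, stride=1, padding=1)  # high-res, low-level
        self.conv2 = nn.Conv2d(ch, ch, 3, stride=2, padding=1)     # deeper, half resolution
        self.conv3 = nn.Conv2d(ch, ch, 3, stride=2, padding=1)     # deeper, quarter resolution
        self.mix = nn.Conv2d(3 * ch, ch, 1)                        # fuse the "rolled" scales
        self.gru = nn.GRU(input_size=ch, hidden_size=hidden, batch_first=True)
        self.head = nn.Conv2d(hidden, ch, 1)                       # context-aware feature map

    def forward(self, x):
        f1 = F.relu(self.conv1(x))
        f2 = F.relu(self.conv2(f1))
        f3 = F.relu(self.conv3(f2))
        # bring the deeper maps back to the low-level resolution and fuse them
        size = f1.shape[-2:]
        f = torch.cat([f1,
                       F.interpolate(f2, size=size, mode="bilinear", align_corners=False),
                       F.interpolate(f3, size=size, mode="bilinear", align_corners=False)],
                      dim=1)
        f = F.relu(self.mix(f))
        # run a GRU over the spatial positions to gather global context
        b, c, h, w = f.shape
        seq = f.flatten(2).transpose(1, 2)        # (B, H*W, C)
        ctx, _ = self.gru(seq)                    # (B, H*W, hidden)
        ctx = ctx.transpose(1, 2).reshape(b, -1, h, w)
        return self.head(ctx)                     # features for a downstream detection head
```

Scanning every spatial position with a single GRU is only one possible arrangement; the paper may organize its multiple GRUs differently and attach a separate detection head for box regression and classification.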