
You are searching for the phrase "visual learning" using the criterion: All fields


Displaying 1-3 of 3
Title:
A machine learning-based mobile robot visual homing approach
Authors:
Zhu, Q.
Ji, X.
Wang, J.
Cai, C.
Links:
https://bibliotekanauki.pl/articles/201706.pdf
Publication date:
2018
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Keywords:
robot navigation
visual homing
panoramic vision sensors
machine learning
homing performance
Description:
Visual homing enables mobile robots to move towards a previously visited location based solely on panoramic vision sensors. In this paper, a SIFT-based visual homing approach incorporating machine learning is presented. The proposed approach reduces the impact of inaccurate landmarks on homing performance and generates a more precise home direction with a simple model. Its effectiveness is verified both on panoramic image databases and on an actual mobile robot; the experimental results reveal that, compared with some traditional visual homing methods, the proposed approach exhibits better homing performance and adaptability in both static and dynamic environments. (An illustrative sketch of the landmark-based homing idea follows this record.)
Source:
Bulletin of the Polish Academy of Sciences. Technical Sciences; 2018, 66, 5; 621-634
0239-7528
Appears in:
Bulletin of the Polish Academy of Sciences. Technical Sciences
Content provider:
Biblioteka Nauki
Article
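The abstract above outlines the approach only at a high level. Below is a minimal, hypothetical Python sketch of SIFT-based visual homing on panoramic images: SIFT landmarks are matched between the current view and the stored home view, and the matched bearings are turned into a home-direction estimate. The bearing mapping, the voting scheme, and the `weight_fn` hook (standing in for the paper's learned landmark-weighting model) are assumptions for illustration, not the authors' actual algorithm.

```python
# Illustrative sketch only: landmark-based visual homing with SIFT on
# panoramic images. The learned landmark weighting from the paper is
# represented by a hypothetical weight_fn callback.
import cv2
import numpy as np

def landmark_bearings(keypoints, image_width):
    """Map keypoint x-coordinates of a panoramic image to bearings in radians."""
    return np.array([2.0 * np.pi * kp.pt[0] / image_width for kp in keypoints])

def estimate_home_direction(current_img, home_img, weight_fn=None):
    """Estimate the home direction from SIFT matches between the current
    panoramic view and the stored home (snapshot) view."""
    sift = cv2.SIFT_create()
    kp_c, des_c = sift.detectAndCompute(current_img, None)
    kp_h, des_h = sift.detectAndCompute(home_img, None)

    # Match descriptors and keep pairs passing Lowe's ratio test.
    matcher = cv2.BFMatcher()
    pairs = matcher.knnMatch(des_c, des_h, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

    width = current_img.shape[1]
    bearings_c = landmark_bearings([kp_c[m.queryIdx] for m in good], width)
    bearings_h = landmark_bearings([kp_h[m.trainIdx] for m in good], width)

    # Each matched landmark votes with the difference of its unit bearing
    # vectors (average-landmark-vector style homing). A learned weight_fn
    # could down-weight unreliable landmarks, as the paper's ML model does.
    weights = np.asarray(weight_fn(good)) if weight_fn else np.ones(len(good))
    votes = np.stack([np.cos(bearings_h) - np.cos(bearings_c),
                      np.sin(bearings_h) - np.sin(bearings_c)], axis=1)
    home_vec = (weights[:, None] * votes).sum(axis=0)
    return np.arctan2(home_vec[1], home_vec[0])  # home direction in radians
```

In practice the returned angle would be interpreted in the robot's current heading frame and followed iteratively until the current view converges to the home snapshot.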
Title:
Multi-model hybrid ensemble weighted adaptive approach with decision level fusion for personalized affect recognition based on visual cues
Authors:
Jadhav, Nagesh
Sugandhi, Rekha
Links:
https://bibliotekanauki.pl/articles/2086876.pdf
Publication date:
2021
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Keywords:
deep learning
convolutional neural network
emotion recognition
transfer learning
late fusion
Description:
In the domain of affective computing, emotional expressions play an important role. Facial expressions, or visual cues, are the primary means of conveying a person's emotional state, and they do so more convincingly than any other cue. With advances in deep learning, convolutional neural networks (CNNs) can automatically extract features from visual cues; however, variably sized and biased datasets remain a major challenge when deploying deep models, and the dataset used for training strongly influences the results. In this paper, we propose a multi-model hybrid ensemble weighted adaptive approach with decision-level fusion for personalized affect recognition based on visual cues. We use a CNN built from scratch together with a pre-trained ResNet-50 model for transfer learning; the ResNet-50 weights are initialized from the VGGFace model before fine-tuning. The proposed system shows a significant improvement in test accuracy for affective state recognition compared with either the singleton CNN trained from scratch or the transfer-learned model alone. The methodology is validated on the Karolinska Directed Emotional Faces (KDEF) dataset, reaching 77.85% accuracy, which is promising compared with existing state-of-the-art methods. (An illustrative sketch of the decision-level fusion idea follows this record.)
Source:
Bulletin of the Polish Academy of Sciences. Technical Sciences; 2021, 69, 6; e138819, 1-11
0239-7528
Appears in:
Bulletin of the Polish Academy of Sciences. Technical Sciences
Content provider:
Biblioteka Nauki
Article
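As a companion to the abstract above, the following is a minimal, hypothetical Keras sketch of decision-level (late) fusion of two emotion classifiers: a small CNN trained from scratch and a fine-tuned ResNet-50 backbone. The seven KDEF expression classes, the 224x224 input size, the fusion weights, and the use of ImageNet weights in place of the paper's VGGFace initialization are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: two emotion classifiers fused at the decision level.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # KDEF covers seven facial expressions

def scratch_cnn():
    """A small CNN trained from scratch on face crops."""
    return models.Sequential([
        layers.Input((224, 224, 3)),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def transfer_resnet50():
    """ResNet-50 backbone with a new softmax head for emotion recognition.
    ImageNet weights stand in here for the VGGFace weights used in the paper."""
    base = tf.keras.applications.ResNet50(include_top=False,
                                          weights="imagenet",
                                          input_shape=(224, 224, 3))
    x = layers.GlobalAveragePooling2D()(base.output)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return models.Model(base.input, out)

def fuse_predictions(p_cnn, p_resnet, w_cnn=0.4, w_resnet=0.6):
    """Weighted decision-level (late) fusion of the two softmax outputs."""
    fused = w_cnn * p_cnn + w_resnet * p_resnet
    return np.argmax(fused, axis=1)
```

Both models would be trained or fine-tuned separately on the same training split; only their per-class probabilities are combined, which is what distinguishes decision-level fusion from feature-level fusion.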
Title:
MFFNet: A multi-frequency feature extraction and fusion network for visual processing
Authors:
Deng, Jinsheng
Zhang, Zhichao
Yin, Xiaoqing
Links:
https://bibliotekanauki.pl/articles/2173678.pdf
Publication date:
2022
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Keywords:
deblurring
multi-feature fusion
deep learning
attention mechanism
Description:
Convolutional neural networks have achieved tremendous success in image processing and computer vision. However, they struggle both with low-frequency information, such as semantic and category content and background color, and with high-frequency information, such as edges and structure. We propose an efficient and accurate deep learning framework, the multi-frequency feature extraction and fusion network (MFFNet), for image processing tasks such as deblurring. MFFNet uses edge and attention modules to restore high-frequency information and overcomes the multiscale parameter problem and the low efficiency of recurrent architectures. It processes information along multiple paths, extracting features such as edges, colors, positions, and differences. Edge detectors and attention modules are then aggregated into units that refine and learn this knowledge, and the resulting multi-path features are fused into a final perception result. Experimental results indicate that the proposed framework achieves state-of-the-art deblurring performance on benchmark datasets. (An illustrative sketch of a multi-frequency fusion block follows this record.)
Source:
Bulletin of the Polish Academy of Sciences. Technical Sciences; 2022, 70, 3; art. no. e140466
0239-7528
Appears in:
Bulletin of the Polish Academy of Sciences. Technical Sciences
Content provider:
Biblioteka Nauki
Article
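To make the multi-frequency idea in the abstract above concrete, the toy Keras block below separates a smoothed low-frequency path from an edge-oriented high-frequency path, re-weights the fused channels with squeeze-and-excitation style attention, and projects the result back to a single feature map. The layer sizes and the attention design are assumptions for illustration; this is not the published MFFNet architecture.

```python
# Illustrative sketch only: a multi-frequency feature extraction and fusion block.
import tensorflow as tf
from tensorflow.keras import layers

def channel_attention(x, reduction=8):
    """Squeeze-and-excitation style channel attention."""
    c = x.shape[-1]
    w = layers.GlobalAveragePooling2D()(x)
    w = layers.Dense(c // reduction, activation="relu")(w)
    w = layers.Dense(c, activation="sigmoid")(w)
    return layers.Multiply()([x, layers.Reshape((1, 1, c))(w)])

def multi_frequency_block(x, channels=64):
    # Low-frequency path: pooled, smoothed context features.
    low = layers.AveragePooling2D(pool_size=2)(x)
    low = layers.Conv2D(channels, 3, padding="same", activation="relu")(low)
    low = layers.UpSampling2D(size=2)(low)
    # High-frequency path: small-kernel convolutions for edge-like detail.
    high = layers.Conv2D(channels, 3, padding="same", activation="relu")(x)
    high = layers.Conv2D(channels, 3, padding="same", activation="relu")(high)
    # Fuse both paths, let attention re-weight channels, then project.
    fused = layers.Concatenate()([low, high])
    fused = channel_attention(fused)
    return layers.Conv2D(channels, 1, padding="same")(fused)

# Example: one block applied to a 128x128 RGB input.
inp = layers.Input((128, 128, 3))
out = multi_frequency_block(inp)
model = tf.keras.Model(inp, out)
```

A full deblurring network would stack several such blocks and end with a reconstruction head that maps the features back to an image.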