
You are searching for the phrase "Fan, Rui" by the criterion: Author


Displaying 1-2 of 2
Title:
Finding robust transfer features for unsupervised domain adaptation
Authors:
Gao, Depeng
Wu, Rui
Liu, Jiafeng
Fan, Xiaopeng
Tang, Xianglong
Links:
https://bibliotekanauki.pl/articles/331356.pdf
Publication date:
2020
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Keywords:
unsupervised domain adaptation
feature reduction
generalized eigenvalue decomposition
object recognition
Description:
An insufficient number, or complete absence, of training samples is a bottleneck in traditional machine learning and object recognition. Unsupervised domain adaptation has recently been proposed and widely applied to cross-domain object recognition; it uses labeled samples from a source domain to improve classification performance in a target domain where no labeled samples are available. The two domains have the same feature and label spaces but different distributions. Most existing approaches aim to learn new representations of samples in the source and target domains by reducing the distribution discrepancy between domains while maximizing the covariance of all samples. However, they ignore subspace discrimination, which is essential for classification. Some recent approaches incorporate discriminative information from the source samples, but the learned space tends to overfit these samples because the structure of the target samples is not considered. Therefore, we propose a feature reduction approach that learns robust transfer features by reducing the distribution discrepancy between domains while preserving the discriminative information of the source domain and the local structure of the target domain. Experimental results on several well-known cross-domain datasets show that the proposed method outperforms state-of-the-art techniques in most cases. A brief illustrative sketch of this kind of projection step follows this record.
Source:
International Journal of Applied Mathematics and Computer Science; 2020, 30, 1; 99-112
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
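The approach summarized in the abstract above can be pictured with a minimal sketch, assuming an MMD-style mean-matching term as the distribution discrepancy and the source between-class scatter as the discriminative term; the function name learn_transfer_projection and the parameters dim and reg are hypothetical, and the paper's exact objective (including its target-structure term) is not reproduced here.

import numpy as np
from scipy.linalg import eigh

def learn_transfer_projection(Xs, ys, Xt, dim=30, reg=1e-3):
    """Learn a linear projection W (d x dim) from labeled source features
    Xs (ns x d, labels ys) and unlabeled target features Xt (nt x d)."""
    d = Xs.shape[1]
    X = np.vstack([Xs, Xt])
    ns, nt = len(Xs), len(Xt)

    # MMD-style discrepancy scatter: after projection, the means of the
    # source and target samples should be close.
    e = np.concatenate([np.full(ns, 1.0 / ns), np.full(nt, -1.0 / nt)])
    M = X.T @ np.outer(e, e) @ X

    # Between-class scatter of the source domain: the discriminative
    # information the projection should preserve.
    mean_all = Xs.mean(axis=0)
    Sb = np.zeros((d, d))
    for c in np.unique(ys):
        Xc = Xs[ys == c]
        diff = (Xc.mean(axis=0) - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)

    # Generalized eigenvalue problem Sb w = lambda (M + reg*I) w:
    # large eigenvalues favour class separation over domain discrepancy.
    vals, vecs = eigh(Sb, M + reg * np.eye(d))
    return vecs[:, np.argsort(vals)[::-1][:dim]]

Both domains would then be mapped with X @ W, and any standard classifier trained on the projected labeled source samples could be applied to the projected target samples.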
Title:
Utilizing relevant RGB-D data to help recognize RGB images in the target domain
Authors:
Gao, Depeng
Liu, Jiafeng
Wu, Rui
Cheng, Dansong
Fan, Xiaopeng
Tang, Xianglong
Links:
https://bibliotekanauki.pl/articles/329725.pdf
Publication date:
2019
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Keywords:
object recognition
RGB-D image
transfer learning
privileged information
Description:
The advent of 3D cameras has made it easy to capture depth information along with RGB images, which helps in various computer vision tasks. However, using these RGB-D images to help recognize RGB images captured by conventional cameras poses two challenges: the depth images are missing at the testing stage, and the training and test data are drawn from different distributions because they are captured with different equipment. To address both challenges jointly, we propose an asymmetric transfer learning framework in which three classifiers are trained on the RGB and depth images of the source domain and the RGB images of the target domain, using a structural risk minimization criterion and regularization theory. A cross-modality co-regularizer constrains the two source classifiers to be consistent with each other, increasing accuracy. Moreover, an L2,1-norm cross-domain co-regularizer magnifies significant visual features and suppresses insignificant ones in the weight vectors of the two RGB classifiers. Through the cross-modality and cross-domain co-regularizers, the knowledge of RGB-D images in the source domain is transferred to the target domain to improve the target classifier. Experimental results show that the proposed method is among the most effective. A brief illustrative sketch of such a jointly regularized objective follows this record.
Source:
International Journal of Applied Mathematics and Computer Science; 2019, 29, 3; 611-621
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
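The jointly regularized training described above can be illustrated with a minimal sketch, assuming squared losses for the three linear classifiers, a consistency term between the two source classifiers as the cross-modality co-regularizer, and a smoothed L2,1 penalty over the stacked RGB weight vectors as the cross-domain co-regularizer; the names joint_loss and l21_smooth and all trade-off parameters are hypothetical, not the paper's exact formulation.

import numpy as np

def l21_smooth(W, eps=1e-8):
    # Smoothed L2,1 norm: sum over rows of the row-wise Euclidean norms.
    return np.sum(np.sqrt(np.sum(W ** 2, axis=1) + eps))

def joint_loss(w_sr, w_sd, w_t, Xs_rgb, Xs_dep, ys, Xt_rgb, yt,
               lam_mod=0.1, lam_dom=0.1, lam_reg=1e-3):
    """Joint objective over three linear classifiers: w_sr (source RGB),
    w_sd (source depth) and w_t (target RGB). yt stands for the labels
    (or pseudo-labels) of the target RGB training samples; this is an
    assumption of the sketch."""
    # Empirical risks of the three classifiers (squared loss for simplicity).
    loss_sr = np.mean((Xs_rgb @ w_sr - ys) ** 2)
    loss_sd = np.mean((Xs_dep @ w_sd - ys) ** 2)
    loss_t = np.mean((Xt_rgb @ w_t - yt) ** 2)

    # Cross-modality co-regularizer: the RGB and depth classifiers should
    # give consistent predictions on the same source samples.
    co_mod = np.mean((Xs_rgb @ w_sr - Xs_dep @ w_sd) ** 2)

    # Cross-domain co-regularizer: stacking the two RGB weight vectors
    # column-wise and penalizing the row-wise L2,1 norm encourages both
    # classifiers to rely on the same salient visual features.
    co_dom = l21_smooth(np.stack([w_sr, w_t], axis=1))

    # Standard norm regularization (structural risk minimization).
    reg = lam_reg * (w_sr @ w_sr + w_sd @ w_sd + w_t @ w_t)

    return (loss_sr + loss_sd + loss_t
            + lam_mod * co_mod + lam_dom * co_dom + reg)

This objective would be minimized over the three weight vectors, for instance by gradient descent or alternating updates; at test time only w_t is applied to target RGB images, so no depth image is required.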
