
You are searching for the phrase "kernel method" by criterion: Subject


Displaying 1-3 of 3
Title:
A feasible k-means kernel trick under non-Euclidean feature space
Authors:
Kłopotek, Robert
Kłopotek, Mieczysław
Wierzchoń, Sławomir
Links:
https://bibliotekanauki.pl/articles/1838163.pdf
Publication date:
2020
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
kernel method
k-means
non-Euclidean feature space
Gower and Legendre theorem
Description:
This paper poses the question of whether the usage of the kernel trick is justified, investigating it for the special case of the kernel k-means algorithm. Kernel k-means is a clustering algorithm that allows data to be clustered in a similar way to k-means when no embedding of the data points into Euclidean space is provided and only a matrix of "distances" (dissimilarities) or similarities is available. The kernel trick lets us bypass the need to find an embedding into Euclidean space. We show that the algorithm returns wrong results if such an embedding does not actually exist, which means the embedding must be found before the algorithm is used. If it is found, the kernel trick is pointless; if it is not found, the distance matrix needs to be repaired. However, the known repair methods require constructing an embedding, which first makes the kernel trick pointless because it is no longer needed, and second means that kernel k-means may return different clusterings before and after the repair, so the value of the clustering is called into question. In the paper, we identify a distance-repair method that produces the same clustering before and after its application and does not need to be performed explicitly, so the embedding does not have to be constructed explicitly. This renders the kernel trick applicable for kernel k-means. (An illustrative sketch of kernel k-means follows this record.)
Source:
International Journal of Applied Mathematics and Computer Science; 2020, 30, 4; 703-715
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
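The abstract above turns on computing point-to-centroid distances directly from a kernel (similarity) matrix. Below is a minimal, illustrative Python sketch of kernel k-means under that assumption; it is not the authors' code, and the function and parameter names are hypothetical.

    # Minimal kernel k-means sketch (illustrative only, not the authors' code).
    # It assumes a symmetric n x n kernel/similarity matrix K. If K is not
    # positive semidefinite (no Euclidean embedding exists), the squared
    # "distances" below can turn negative -- the failure mode the paper analyses.
    import numpy as np

    def kernel_kmeans(K, k, n_iter=100, seed=0):
        # Kernel trick: squared distance of point i to the centroid of cluster C
        # in feature space is K[i, i] - 2 * mean(K[i, C]) + mean(K[C, C]).
        n = K.shape[0]
        rng = np.random.default_rng(seed)
        labels = rng.integers(0, k, size=n)            # random initial assignment
        diag = np.diag(K)
        for _ in range(n_iter):
            dist = np.full((n, k), np.inf)
            for c in range(k):
                members = np.flatnonzero(labels == c)
                if members.size == 0:                  # skip empty clusters
                    continue
                within = K[np.ix_(members, members)].mean()
                cross = K[:, members].mean(axis=1)
                dist[:, c] = diag - 2.0 * cross + within
            new_labels = dist.argmin(axis=1)
            if np.array_equal(new_labels, labels):     # converged
                break
            labels = new_labels
        return labels

Note that no explicit embedding of the data points is ever formed: only entries of K are read, which is exactly what the paper's question about the kernel trick concerns.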
Title:
Comparison of prototype selection algorithms used in construction of neural networks learned by SVD
Authors:
Jankowski, N.
Links:
https://bibliotekanauki.pl/articles/330020.pdf
Publication date:
2018
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
radial basis function network
extreme learning machine
kernel method
prototype selection
machine learning
k nearest neighbours
Description:
Radial basis function networks (RBFNs) and extreme learning machines (ELMs) can be seen as linear combinations of kernel functions (hidden neurons). The kernels can be constructed by a random process, as in ELMs; their positions can be initialized with a random subset of the training vectors; or they can be constructed in a (sub-)learning process (for example, by k-means). We found that kernels constructed using prototype selection algorithms provide very accurate and stable solutions. Moreover, prototype selection algorithms automatically choose not only the placement of the prototypes but also their number, so it is no longer necessary to estimate the number of kernels with time-consuming multiple train-test procedures. The best learning results are obtained by pseudo-inverse learning with a singular value decomposition (SVD) algorithm. The article presents a comparison of several prototype selection algorithms combined with SVD-based learning. The comparison clearly shows that combining prototype selection with SVD learning of a neural network is significantly better than a random selection of kernels for the RBFN or the ELM, the support vector machine or the kNN. Moreover, the presented learning scheme requires no parameters except for the width of the Gaussian kernel. (An illustrative sketch of SVD-based pseudo-inverse learning with prototype kernels follows this record.)
Source:
International Journal of Applied Mathematics and Computer Science; 2018, 28, 4; 719-733
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
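To make the learning scheme described above concrete, here is an illustrative Python sketch, not the paper's implementation: an RBF network whose hidden kernels are centred on a subset of training vectors ("prototypes") and whose output weights come from SVD-based pseudo-inverse learning. A real prototype selection algorithm would also decide how many prototypes to keep; a random subset stands in for it here, and all names are hypothetical.

    import numpy as np

    def gaussian_design(X, prototypes, sigma=1.0):
        # Hidden-layer activations: Gaussian kernels centred on the prototypes.
        d2 = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def fit_rbfn_svd(X, Y, prototypes, sigma=1.0):
        # Output weights W solve H W ~= Y in the least-squares sense via the
        # Moore-Penrose pseudo-inverse, which NumPy computes through an SVD.
        H = gaussian_design(X, prototypes, sigma)
        return np.linalg.pinv(H) @ Y

    # Hypothetical usage with a random prototype subset on synthetic data:
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    Y = (X[:, 0] > 0).astype(float)[:, None]
    prototypes = X[rng.choice(len(X), size=20, replace=False)]
    W = fit_rbfn_svd(X, Y, prototypes, sigma=1.5)
    predictions = gaussian_design(X, prototypes, sigma=1.5) @ W

Consistently with the abstract, the only free parameter left in this scheme is the Gaussian width sigma; the prototypes determine both the placement and the number of kernels.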
Title:
A fast neural network learning algorithm with approximate singular value decomposition
Authors:
Jankowski, Norbert
Linowiecki, Rafał
Links:
https://bibliotekanauki.pl/articles/330870.pdf
Publication date:
2019
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
Moore–Penrose pseudoinverse
radial basis function network
extreme learning machine
kernel method
machine learning
singular value decomposition
deep extreme learning
principal component analysis
Description:
The learning of neural networks is becoming more and more important. Researchers have constructed dozens of learning algorithms, yet it is still necessary to develop faster, more flexible or more accurate ones. With fast learning we can examine more learning scenarios for a given problem, especially in the case of meta-learning. In this article we focus on the construction of a much faster learning algorithm and its modifications, especially for nonlinear versions of neural networks. The main idea of the algorithm lies in a fast approximation of the Moore–Penrose pseudo-inverse matrix. The complexity of the original singular value decomposition algorithm is O(mn²). We consider algorithms with a complexity of O(mnl), where l < n and l is often significantly smaller than n. Such learning algorithms can be applied to the learning of radial basis function networks, extreme learning machines or deep ELMs, principal component analysis, or even missing data imputation. (An illustrative sketch of a rank-l approximate pseudo-inverse follows this record.)
Source:
International Journal of Applied Mathematics and Computer Science; 2019, 29, 3; 581-594
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
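The abstract above hinges on replacing the exact O(mn²) SVD with a cheaper rank-l factorization. The following Python sketch is illustrative only, not the authors' algorithm: it approximates the Moore–Penrose pseudo-inverse through a randomized rank-l SVD, so the dominant cost scales as O(mnl); all names are hypothetical.

    import numpy as np

    def approx_pinv(A, l, seed=0):
        # A is m x n; l < n is the target rank of the approximation.
        m, n = A.shape
        rng = np.random.default_rng(seed)
        # Randomized range finder: project A onto l random directions, O(mnl).
        Q, _ = np.linalg.qr(A @ rng.normal(size=(n, l)))
        # Exact SVD of the small l x n matrix Q^T A, roughly O(n * l^2).
        U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
        U = Q @ U_small
        # Pseudo-inverse assembled from the truncated factors:
        # A^+ is approximated by V diag(1/s) U^T, an n x m matrix.
        return Vt.T @ np.diag(1.0 / s) @ U.T

Such an approximate pseudo-inverse could then stand in for the exact one in pseudo-inverse learning of RBFNs or ELMs, as in the sketch after the previous record.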
