
You are searching for the phrase "metoda jądrowa" (kernel method) by criterion: Topic


Displaying 1-3 of 3
Title:
Kernel Ho-Kashyap classifier with generalization control
Authors:
Łęski, J.
Links:
https://bibliotekanauki.pl/articles/907269.pdf
Publication date:
2004
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Topics:
kernel method (metoda jądrowa)
robust method (metoda odporna)
classifier design (projekt klasyfikatora)
kernel methods
classifier design
Ho-Kashyap classifier
generalization control
robust methods
Description:
This paper introduces a new classifier design method based on a kernel extension of the classical Ho-Kashyap procedure. The proposed method uses an approximation of the absolute error rather than the squared error to design a classifier, which leads to robustness against outliers and a better approximation of the misclassification error. Additionally, easy control of the generalization ability is obtained using the structural risk minimization induction principle from statistical learning theory. Finally, examples are given to demonstrate the validity of the introduced method.
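For intuition, here is a minimal Python sketch of a kernelized Ho-Kashyap iteration. It follows the classical squared-error update rather than the paper's robust absolute-error criterion, and it omits the SRM-based generalization control; the Gaussian kernel and all parameter values are illustrative assumptions, not the authors' implementation.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ho_kashyap(X, y, gamma=1.0, rho=0.1, iters=200):
    # y must contain +/-1 class labels
    K = rbf_kernel(X, X, gamma)                              # implicit feature map
    Y = y[:, None] * np.hstack([K, np.ones((len(y), 1))])    # signed, augmented patterns
    b = np.ones(len(y))                                      # margin vector, kept positive
    Y_pinv = np.linalg.pinv(Y)
    for _ in range(iters):
        w = Y_pinv @ b                                       # least-squares solve of Y w ~ b
        e = Y @ w - b                                        # residual
        b = b + rho * (e + np.abs(e))                        # raise margins only where e > 0
    return w

def predict(X_train, w, X_new, gamma=1.0):
    K = rbf_kernel(X_new, X_train, gamma)
    return np.sign(np.hstack([K, np.ones((len(X_new), 1))]) @ w)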
Source:
International Journal of Applied Mathematics and Computer Science; 2004, 14, 1; 53-61
ISSN: 1641-876X
e-ISSN: 2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
Comparison of prototype selection algorithms used in construction of neural networks learned by SVD
Authors:
Jankowski, N.
Links:
https://bibliotekanauki.pl/articles/330020.pdf
Publication date:
2018
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Topics:
radial basis function network
extreme learning machine
kernel method
prototype selection
machine learning
k nearest neighbours
radial basis function (radialna funkcja bazowa)
kernel method (metoda jądrowa)
machine learning (uczenie maszynowe)
k nearest neighbours method (metoda k najbliższych sąsiadów)
Description:
Radial basis function networks (RBFNs) and extreme learning machines (ELMs) can be seen as linear combinations of kernel functions (hidden neurons). Kernels can be constructed randomly, as in ELMs; initialized at a random subset of training vectors; or built in a (sub-)learning process, for example by k-means. We found that kernels constructed using prototype selection algorithms provide very accurate and stable solutions. Moreover, prototype selection algorithms automatically choose not only the placement of the prototypes but also their number, so it is no longer necessary to estimate the number of kernels with time-consuming, repeated train-test procedures. The best learning results are obtained by pseudo-inverse learning with a singular value decomposition (SVD) algorithm. The article presents a comparison of several prototype selection algorithms combined with SVD-based learning. The comparison clearly shows that combining prototype selection with SVD learning of a neural network is significantly better than a random selection of kernels for the RBFN or the ELM, and also outperforms the support vector machine and kNN. Moreover, the presented learning scheme requires no parameters except the width of the Gaussian kernel.
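To make the scheme concrete, a minimal Python sketch follows: prototypes are chosen by a classical selection algorithm (Hart's condensed nearest neighbour, used here only as a stand-in for the selection algorithms actually compared in the article), Gaussian kernels are centred on them, and the output weights are computed by SVD-based pseudo-inverse learning. All names and the fixed kernel width are assumptions.

import numpy as np

def condensed_nn(X, y):
    # Hart's condensed nearest neighbour: a simple prototype selector that
    # also decides how many prototypes to keep (illustrative stand-in only)
    keep = [0]
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            if i in keep:
                continue
            nearest = np.argmin(((X[keep] - X[i]) ** 2).sum(1))
            if y[keep[nearest]] != y[i]:          # misclassified by current prototypes
                keep.append(i)
                changed = True
    return X[keep]

def fit_rbfn(X, y, prototypes, width=1.0):
    # Hidden layer: Gaussian kernels centred on the prototypes; output
    # weights by the pseudo-inverse (np.linalg.pinv is SVD-based)
    d2 = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    H = np.exp(-d2 / (2 * width ** 2))
    return np.linalg.pinv(H) @ y

def predict_rbfn(X, prototypes, W, width=1.0):
    d2 = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2)) @ W

Note how the selector fixes both the placement and the number of hidden kernels, so the only remaining free parameter is the Gaussian width, matching the abstract's claim.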
Source:
International Journal of Applied Mathematics and Computer Science; 2018, 28, 4; 719-733
ISSN: 1641-876X
e-ISSN: 2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
A complete gradient clustering algorithm formed with kernel estimators
Authors:
Kulczycki, P.
Charytanowicz, M.
Links:
https://bibliotekanauki.pl/articles/907781.pdf
Publication date:
2010
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Topics:
data analysis (analiza danych)
data mining (eksploracja danych)
clustering (grupowanie)
statistical method (metoda statystyczna)
kernel estimation (estymacja jądrowa)
numerical calculations (obliczenia numeryczne)
data analysis
data mining
clustering
gradient procedures
nonparametric statistical methods
kernel estimators
numerical calculations
Description:
The aim of this paper is to provide a gradient clustering algorithm in its complete form, suitable for direct use without requiring deeper statistical knowledge. The values of all parameters are calculated effectively using optimizing procedures. Moreover, an illustrative analysis of the meaning of particular parameters is given, together with the effects of modifying them away from their initially assigned optimal values. The proposed algorithm does not impose strict assumptions on the desired number of clusters, which allows the obtained number to be better suited to the real structure of the data. A further specific feature is the possibility of influencing the proportion between the number of clusters in areas where the data elements are dense and in sparse regions. Finally, by detecting one-element clusters, the algorithm allows atypical elements to be identified, which enables their elimination or possible assignment to bigger clusters, thus increasing the homogeneity of the data set.
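As an illustration of the core idea, here is a minimal Python sketch of kernel-density gradient clustering in its plain mean-shift form. The paper's optimization-based calculation of all parameter values and its full treatment of one-element clusters are not reproduced; the bandwidth h and the tolerances are illustrative assumptions.

import numpy as np

def gradient_clustering(X, h=0.5, steps=100, tol=1e-4):
    # Gradient ascent on a Gaussian kernel density estimate: every point
    # climbs towards a local mode of the estimated density, and points
    # that reach the same mode form a cluster.
    Z = X.copy()
    for _ in range(steps):
        d2 = ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * h ** 2))                   # kernel weights
        Z_new = (W @ X) / W.sum(axis=1, keepdims=True)   # shift towards the mode
        if np.abs(Z_new - Z).max() < tol:                # stop when points settle
            Z = Z_new
            break
        Z = Z_new
    labels = np.full(len(X), -1)
    k = 0
    for i in range(len(X)):                              # group coinciding modes
        if labels[i] == -1:
            same_mode = np.abs(Z - Z[i]).max(axis=1) < 100 * tol
            labels[(labels == -1) & same_mode] = k
            k += 1
    return labels        # one-element clusters mark candidate atypical points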
Source:
International Journal of Applied Mathematics and Computer Science; 2010, 20, 1; 123-134
ISSN: 1641-876X
e-ISSN: 2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
