
You are searching for the phrase "prototype selection" by criterion: Subject


Showing 1-4 of 4
Title:
Parallel MCNN (PMCNN) with application to prototype selection on large and streaming data
Authors:
Devi, V. S.
Meena, L.
Links:
https://bibliotekanauki.pl/articles/91686.pdf
Publication date:
2017
Publisher:
Społeczna Akademia Nauk w Łodzi. Polskie Towarzystwo Sieci Neuronowych
Subjects:
prototype selection
one-pass algorithm
streaming data
distributed algorithm
Description:
The Modified Condensed Nearest Neighbour (MCNN) algorithm for prototype selection is order-independent, unlike the Condensed Nearest Neighbour (CNN) algorithm. Although MCNN gives better performance, its time requirement is much higher than CNN's. To mitigate this, we propose a distributed approach called Parallel MCNN (pMCNN), which cuts down the running time drastically while maintaining good performance. We also propose two incremental algorithms based on MCNN for prototype selection on large and streaming data. The results of these algorithms, using MCNN and pMCNN, are compared with those of an existing algorithm for streaming data.
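The baseline the abstract builds on, the classic Condensed Nearest Neighbour condensation rule, can be sketched as follows. This is a minimal illustration of standard CNN (not the paper's MCNN or pMCNN); the function name and the `max_passes` cap are our own choices.

```python
import numpy as np

def cnn_prototype_selection(X, y, max_passes=10):
    """Condensed Nearest Neighbour sketch: a point is added to the
    prototype set only if the current prototypes misclassify it under
    the 1-NN rule. Note the result depends on scan order, which is
    exactly the weakness MCNN addresses."""
    proto_idx = [0]  # seed the condensed set with the first point
    for _ in range(max_passes):
        changed = False
        for i in range(len(X)):
            P = X[proto_idx]
            nearest = proto_idx[int(np.argmin(np.linalg.norm(P - X[i], axis=1)))]
            if y[nearest] != y[i]:   # misclassified -> absorb into prototypes
                proto_idx.append(i)
                changed = True
        if not changed:              # a full clean pass means convergence
            break
    return sorted(set(proto_idx))
```

After convergence the condensed set classifies every training point correctly with 1-NN, typically using far fewer points than the full training set.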
Source:
Journal of Artificial Intelligence and Soft Computing Research; 2017, 7, 3; 155-169
2083-2567
2449-6499
Appears in:
Journal of Artificial Intelligence and Soft Computing Research
Content provider:
Biblioteka Nauki
Article
Title:
The PM-M prototype selection system
Authors:
Grudziński, K.
Links:
https://bibliotekanauki.pl/articles/206602.pdf
Publication date:
2016
Publisher:
Polska Akademia Nauk. Instytut Badań Systemowych PAN
Subjects:
selection of reference instances
prototype selection
k-Nearest Neighbors algorithm
classification of data
Description:
In this paper, the algorithm realizing the author's prototype selection method, called PM-M (Partial Memory - Minimization), is described in detail. Computational experiments carried out with the raw PM-M model and with its majority ensembles indicate that, even though the selected prototype sets average only about five percent of the size of the original training datasets, the classification results remain in good statistical agreement with the 1-Nearest Neighbor (IB1) model trained on the original (i.e. unpruned) data. It is also shown that the system under study is competitive in generalization ability with other well-established prototype selection systems, such as CHC, SSMA and GGA. Moreover, the proposed algorithm reduces the required computation time by approximately one to three orders of magnitude relative to the reference prototype classifiers taken for comparison. It is also demonstrated that the PM-M system can be applied directly to the analysis of very large data, unlike most other prototype methods, which have to rely on the stratification approach.
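The evaluation setup the abstract describes, an IB1-style 1-NN classifier over a selected prototype subset, plus a majority-vote ensemble of such classifiers, can be sketched generically. This is not the PM-M selection algorithm itself (the abstract does not spell it out); the function names are our own.

```python
import numpy as np
from collections import Counter

def nn1_predict(P, labels, X):
    """IB1-style 1-NN: each query gets the label of its nearest prototype."""
    return np.array([labels[int(np.argmin(np.linalg.norm(P - x, axis=1)))]
                     for x in X])

def majority_ensemble(prototype_sets, X):
    """Majority vote over several 1-NN classifiers, each built on a
    different selected prototype subset (the ensemble scheme the
    abstract refers to)."""
    votes = np.stack([nn1_predict(P, lab, X) for P, lab in prototype_sets])
    return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])
```

With prototype sets a few percent of the training data, each member classifier is cheap to evaluate, and the vote smooths out variance between selections.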
Source:
Control and Cybernetics; 2016, 45, 4; 539-561
0324-8569
Appears in:
Control and Cybernetics
Content provider:
Biblioteka Nauki
Article
Title:
Comparison of prototype selection algorithms used in construction of neural networks learned by SVD
Authors:
Jankowski, N.
Links:
https://bibliotekanauki.pl/articles/330020.pdf
Publication date:
2018
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
radial basis function network
extreme learning machine
kernel method
prototype selection
machine learning
k nearest neighbours
Description:
Radial basis function networks (RBFNs) and extreme learning machines (ELMs) can be seen as linear combinations of kernel functions (hidden neurons). Kernels can be constructed by random processes, as in ELMs; kernel positions can be initialized with a random subset of training vectors; or kernels can be constructed in a (sub-)learning process (for example, by k-means). We found that kernels constructed using prototype selection algorithms provide very accurate and stable solutions. What is more, prototype selection algorithms automatically choose not only the placement of prototypes but also their number. Thanks to this advantage, it is no longer necessary to estimate the number of kernels with time-consuming multiple train-test procedures. The best learning results can be obtained by pseudo-inverse learning with a singular value decomposition (SVD) algorithm. The article presents a comparison of several prototype selection algorithms working with singular value decomposition-based learning. The comparison clearly shows that the combination of prototype selection and SVD learning of a neural network is significantly better than a random selection of kernels for the RBFN or the ELM, the support vector machine, or the kNN. Moreover, the presented learning scheme requires no parameters except for the width of the Gaussian kernel.
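The linear learning step the abstract describes, fixing Gaussian kernels at chosen prototype centers and solving the output weights by SVD-based pseudo-inverse, can be sketched as below. The center set is assumed given (e.g. by a prototype selection algorithm); the function names and the single-width Gaussian are our simplifications.

```python
import numpy as np

def rbf_design_matrix(X, centers, sigma=1.0):
    """Hidden-layer activations: one Gaussian kernel per prototype center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rbf_svd_fit(X, Y, centers, sigma=1.0):
    """Solve the output weights by pseudo-inverse (computed via SVD),
    i.e. least-squares linear learning on top of fixed kernels."""
    H = rbf_design_matrix(X, centers, sigma)
    return np.linalg.pinv(H) @ Y

def rbf_predict(X, centers, W, sigma=1.0):
    return rbf_design_matrix(X, centers, sigma) @ W
```

Note the only free parameter is the Gaussian width `sigma`, matching the abstract's claim; the number and placement of kernels come from the prototype selector.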
Source:
International Journal of Applied Mathematics and Computer Science; 2018, 28, 4; 719-733
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
Selection of prototypes with the EkP system
Authors:
Grudziński, K.
Links:
https://bibliotekanauki.pl/articles/969794.pdf
Publication date:
2010
Publisher:
Polska Akademia Nauk. Instytut Badań Systemowych PAN
Subjects:
classification of data
selection of reference vectors
prototype methods
Description:
We have recently introduced a new system for the selection of reference instances, called the EkP system (Exactly k Prototypes). In this paper we study the suitability of the EkP method for training data reduction on seventeen datasets. As the underlying classifier, the well-known IB1 system (the 1-Nearest Neighbor classifier) has been chosen. We compare the generalization ability of our method with that of IB1 trained on the entire training data and with that of Learning Vector Quantization (LVQ), for which the number of codebooks was set equal to the number of prototypes selected by the EkP system. The comparison indicates that, even with only a few prototypes chosen by the EkP method, results statistically indistinguishable from those of the IB1 system were obtained on nearly all seventeen datasets. On many datasets the generalization ability of the EkP method was higher than that attained with LVQ.
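The LVQ baseline the abstract compares against can be sketched as the classic LVQ1 update rule: the nearest codebook vector is pulled toward same-class samples and pushed away from other-class samples. This is a textbook LVQ1 sketch, not the exact LVQ configuration of the paper; the learning rate and epoch count are illustrative.

```python
import numpy as np

def lvq1(X, y, codebooks, cb_labels, lr=0.1, epochs=20):
    """LVQ1 sketch: for each sample, move its nearest codebook toward
    the sample if the labels match, away from it otherwise."""
    W = codebooks.astype(float).copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            j = int(np.argmin(np.linalg.norm(W - xi, axis=1)))
            if cb_labels[j] == yi:
                W[j] += lr * (xi - W[j])   # attract: correct class
            else:
                W[j] -= lr * (xi - W[j])   # repel: wrong class
    return W
```

Classification then works exactly like 1-NN over the trained codebooks, which is why matching the codebook count to the EkP prototype count gives a fair comparison.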
Source:
Control and Cybernetics; 2010, 39, 2; 487-503
0324-8569
Appears in:
Control and Cybernetics
Content provider:
Biblioteka Nauki
Article