You are searching for the phrase "bayes" by criterion: Subject


Showing 1-3 of 3
Title:
Learning the naive Bayes classifier with optimization models
Authors:
Taheri, S.
Mammadov, M.
Links:
https://bibliotekanauki.pl/articles/908351.pdf
Publication date:
2013
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
Bayesian networks
naive Bayes classifier
optimization
discretization
Description:
Naive Bayes is among the simplest probabilistic classifiers. It often performs surprisingly well in many real-world applications, despite the strong assumption that all features are conditionally independent given the class. Since the structure of this classifier is known, learning consists of calculating class probabilities and conditional probabilities from training data; these probabilities are then used to classify new observations. In this paper, we introduce three novel optimization models for the naive Bayes classifier in which both class probabilities and conditional probabilities are treated as variables. The values of these variables are found by solving the corresponding optimization problems. Numerical experiments are conducted on several real-world binary classification data sets, where continuous features are discretized by three different methods. The performance of these models is compared with that of the naive Bayes classifier, tree-augmented naive Bayes, the SVM, C4.5 and the nearest-neighbor classifier. The results demonstrate that the proposed models can significantly improve the performance of the naive Bayes classifier while maintaining its simple structure.
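The learning and prediction steps referred to above can be illustrated with a minimal Python sketch of the standard naive Bayes classifier for discretized features. The function names and the Laplace smoothing are illustrative assumptions; the paper's optimization models, which treat the probabilities as decision variables, are not reproduced here.

    from collections import Counter, defaultdict

    def train_naive_bayes(X, y):
        """Estimate class priors P(c) and conditionals P(x_j = v | c) from discrete data."""
        n = len(y)
        priors = {c: cnt / n for c, cnt in Counter(y).items()}
        cond = defaultdict(Counter)              # (class, feature index) -> value counts
        for xi, c in zip(X, y):
            for j, v in enumerate(xi):
                cond[(c, j)][v] += 1
        return priors, cond

    def predict(priors, cond, x, alpha=1.0):
        """Classify one observation under the conditional independence assumption."""
        best, best_score = None, float("-inf")
        for c, prior in priors.items():
            score = prior
            for j, v in enumerate(x):
                counts = cond[(c, j)]
                # Laplace-smoothed estimate of P(x_j = v | c)
                score *= (counts[v] + alpha) / (sum(counts.values()) + alpha * max(len(counts), 1))
            if score > best_score:
                best, best_score = c, score
        return best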
Source:
International Journal of Applied Mathematics and Computer Science; 2013, 23, 4; 787-795
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
On Naive Bayes in Speech Recognition
Authors:
Toth, L.
Kocsor, A.
Csirik, J.
Links:
https://bibliotekanauki.pl/articles/908542.pdf
Publication date:
2005
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
naive Bayes
speech recognition
segment-based speech recognition
hidden Markov model
Description:
The currently dominant speech recognition technology, hidden Markov modeling, has long been criticized for its simplistic assumptions about speech, and especially for the naive Bayes combination rule inherent in it. Many sophisticated alternative models have been suggested over the last decade. These, however, have demonstrated only modest improvements and brought no paradigm shift in technology. The goal of this paper is to examine why the HMM performs so well in spite of its incorrect bias due to the naive Bayes assumption. To do this, we create an algorithmic framework that allows us to experiment with alternative combination schemes and helps us understand the factors that influence recognition performance. From these findings we argue that the bias peculiar to the naive Bayes rule is not really detrimental to phoneme classification performance. Furthermore, it ensures consistent behavior in outlier modeling, allowing efficient management of insertion and deletion errors.
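As a rough illustration of the naive Bayes combination rule discussed in the abstract, the Python sketch below treats the frames of a segment as conditionally independent given the phoneme class and therefore sums their log-likelihoods. The names are illustrative assumptions; this is not the paper's experimental framework.

    def naive_bayes_combine(frame_log_likelihoods, class_log_prior):
        """Score of one class hypothesis: log prior plus the sum of per-frame log-likelihoods."""
        return class_log_prior + sum(frame_log_likelihoods)

    def classify_segment(frame_scores, log_priors):
        """frame_scores: {phoneme class: [log p(frame_t | class) for each frame t]}."""
        return max(frame_scores,
                   key=lambda c: naive_bayes_combine(frame_scores[c], log_priors[c]))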
Source:
International Journal of Applied Mathematics and Computer Science; 2005, 15, 2; 287-294
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
Bayes sharpening of imprecise information
Authors:
Kulczycki, B.
Charytanowicz, M.
Links:
https://bibliotekanauki.pl/articles/908526.pdf
Publication date:
2005
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
imprecise information
sharpening
conditioning factors
kernel estimators
Bayes decision rule
nonsymmetrical loss function
numerical calculations
Description:
A complete algorithm is presented for the sharpening of imprecise information, based on the methodology of kernel estimators and the Bayes decision rule, including conditioning factors. The use of the Bayes rule with a nonsymmetrical loss function allows the different consequences of under- and overestimation of a sharp value (a real number) to be taken into account, while minimizing potential losses. A conditional approach yields a more precise result by using information entered as the assumed (e.g., current) values of conditioning factors of continuous and/or binary type. The nonparametric methodology of statistical kernel estimators frees the investigated procedure from arbitrary assumptions concerning the forms of the distributions characterizing both the imprecise information and the conditioning random variables. The concept presented here is universal and can be applied to a wide range of tasks in contemporary engineering, economics, and medicine.
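A minimal numerical sketch of this idea in Python with NumPy: the imprecise quantity is represented by a kernel density estimate, and the sharp value is chosen as the quantile that minimizes the expected asymmetric linear loss, whose level is cost_under / (cost_under + cost_over). The parameter names, the Gaussian kernel and Silverman's bandwidth rule are assumptions, and the conditional part of the paper's algorithm is omitted.

    import numpy as np

    def sharpen(samples, cost_under, cost_over, grid_size=2000):
        """Sharp value minimizing expected asymmetric linear loss under a Gaussian KDE."""
        samples = np.asarray(samples, dtype=float)
        h = 1.06 * samples.std() * len(samples) ** (-1 / 5)   # Silverman's rule of thumb
        grid = np.linspace(samples.min() - 3 * h, samples.max() + 3 * h, grid_size)
        # Gaussian kernel density estimate evaluated on the grid.
        density = np.exp(-0.5 * ((grid[:, None] - samples[None, :]) / h) ** 2)
        density = density.sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))
        cdf = np.cumsum(density)
        cdf /= cdf[-1]
        tau = cost_under / (cost_under + cost_over)           # optimal quantile level
        return grid[np.searchsorted(cdf, tau)]

For example, with cost_under = 3 and cost_over = 1 the procedure returns approximately the 0.75-quantile of the estimated distribution, i.e. it deliberately overestimates because underestimation is three times as costly.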
Source:
International Journal of Applied Mathematics and Computer Science; 2005, 15, 3; 393-404
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article