
You are searching for the phrase "m-estimation" by criterion: Subject


Displaying 1-2 of 2
Title:
Specialized, MSE-optimal m-estimators of the rule probability especially suitable for machine learning
Authors:
Piegat, A.
Landowski, M.
Links:
https://bibliotekanauki.pl/articles/205508.pdf
Publication date:
2014
Publisher:
Polska Akademia Nauk. Instytut Badań Systemowych PAN
Subjects:
machine learning
rule probability
probability estimation
m-estimators
decision trees
rough set theory
Description:
The paper presents an improved sample-based estimation of rule probability, an important indicator of rule quality and credibility in machine learning systems. It concerns rules obtained, e.g., with the use of decision trees or rough set theory. Particular rules are frequently supported by only a small or very small number of data items. The rule probability is mostly investigated with global estimators such as the frequency estimator, the Laplace estimator, or the m-estimator constructed for the full probability interval [0, 1]. The paper shows that the precision of rule probability estimation can be considerably increased by using m-estimators specialized for an interval [ph_min, ph_max] given by the problem expert. The paper also presents a new interpretation of the m-estimator parameters that can be optimized in the estimators. (A minimal code sketch of the baseline estimators mentioned here follows this record.)
Source:
Control and Cybernetics; 2014, 43, 1; 133-160
0324-8569
Appears in:
Control and Cybernetics
Content provider:
Biblioteka Nauki
Article
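
The global estimators named in the abstract above have simple closed forms. Below is a minimal Python sketch, not taken from the paper, of the three baselines it mentions (relative frequency, Laplace, and the classical m-estimate); the interval-specialized m-estimators proposed by the authors are not reproduced here, and the function names, the fallback for empty samples, and the example prior p_a = 0.5 are illustrative assumptions.

```python
# Minimal sketch of the global rule-probability estimators mentioned in the
# abstract: relative frequency, Laplace's rule, and the classical m-estimate.
# The interval-specialized variants proposed in the paper are NOT reproduced here.

def frequency_estimate(s: int, n: int) -> float:
    """Relative frequency: s successes out of n covered examples."""
    return s / n if n > 0 else 0.5  # assumption: fall back to 0.5 for empty samples

def laplace_estimate(s: int, n: int, k: int = 2) -> float:
    """Laplace's rule of succession for k classes (k = 2 for a binary rule)."""
    return (s + 1) / (n + k)

def m_estimate(s: int, n: int, p_a: float = 0.5, m: float = 2.0) -> float:
    """Classical m-estimate: shrink the relative frequency towards the prior
    p_a with strength m (equivalent to m virtual examples distributed as p_a)."""
    return (s + m * p_a) / (n + m)

if __name__ == "__main__":
    # A rule supported by a very small sample: 2 positive out of 3 covered examples.
    s, n = 2, 3
    print(frequency_estimate(s, n))   # ~0.667
    print(laplace_estimate(s, n))     # 0.6
    print(m_estimate(s, n, p_a=0.5))  # 0.6 with m = 2
```

On such small supports the raw frequency is easily pulled to extreme values, which is why the shrinkage-based estimators above, and their specialized variants in the article, are of interest.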
Title:
Revisiting the optimal probability estimator from small samples for data mining
Authors:
Cestnik, Bojan
Links:
https://bibliotekanauki.pl/articles/330350.pdf
Publication date:
2019
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
probability estimation
small sample
minimal error
m-estimate
Description:
Estimation of probabilities from empirical data samples has drawn close attention in the scientific community and has been identified as a crucial phase in many machine learning and knowledge discovery research projects and applications. In addition to trivial and straightforward estimation with relative frequency, more elaborate probability estimation methods for small samples have been proposed and applied in practice (e.g., Laplace's rule, the m-estimate). Piegat and Landowski (2012) proposed a novel probability estimation method for small samples, Eph√2, which is optimal with respect to the mean absolute error of the estimation result. In this paper we show that, even though the articulation of Piegat's formula seems different, it is in fact a special case of the m-estimate, with pa = 1/2 and m = √2. In the context of an experimental framework, we present an in-depth analysis of several probability estimation methods with respect to their mean absolute errors and demonstrate their potential advantages and disadvantages. We extend the analysis from single-instance samples to samples with a moderate number of instances. For the purpose of estimating probabilities, we define small samples as samples containing either fewer than four successes or fewer than four failures, and justify the definition by analysing probability estimation errors on various sample sizes. (A small illustrative sketch of this comparison follows this record.)
Source:
International Journal of Applied Mathematics and Computer Science; 2019, 29, 4; 783-796
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
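
The abstract's central claim, that Eph√2 coincides with the m-estimate for pa = 1/2 and m = √2, can be checked directly, and the mean-absolute-error comparison it describes can be sketched with a small Monte Carlo experiment. The sketch below is only an assumed setup for illustration (true probability drawn uniformly, n Bernoulli draws), not the experimental framework of the article, and the function names are hypothetical.

```python
import math
import random

def m_estimate(s: int, n: int, p_a: float, m: float) -> float:
    """Classical m-estimate: (s + m * p_a) / (n + m)."""
    return (s + m * p_a) / (n + m)

def eph_sqrt2(s: int, n: int) -> float:
    """Eph_sqrt2 written as the special case of the m-estimate with
    p_a = 1/2 and m = sqrt(2), i.e. (s + sqrt(2)/2) / (n + sqrt(2))."""
    return (s + math.sqrt(2) / 2) / (n + math.sqrt(2))

def mean_absolute_error(estimator, n: int, trials: int = 20000, seed: int = 0) -> float:
    """Monte Carlo MAE of an estimator over uniformly drawn true probabilities.
    Illustrative setup only, not the paper's experimental framework."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        p_true = rng.random()                              # true probability
        s = sum(rng.random() < p_true for _ in range(n))   # successes in n Bernoulli draws
        total += abs(estimator(s, n) - p_true)
    return total / trials

if __name__ == "__main__":
    s, n = 1, 3
    # The two articulations coincide (up to floating-point rounding):
    print(eph_sqrt2(s, n), m_estimate(s, n, p_a=0.5, m=math.sqrt(2)))
    # MAE comparison on a small sample of n = 3:
    print("frequency :", mean_absolute_error(lambda s, n: s / n, n=3))
    print("Laplace   :", mean_absolute_error(lambda s, n: (s + 1) / (n + 2), n=3))
    print("Eph_sqrt2 :", mean_absolute_error(eph_sqrt2, n=3))
```

Under this assumed uniform-prior setup, shrinking towards 1/2 typically lowers the MAE on very small samples relative to the raw relative frequency, which is the kind of effect the paper analyses in depth.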
