
You are searching for the phrase "Markov chain modeling" by the criterion: Subject


Displaying 1-4 of 4
Title:
Asymptotic guarantee of success for multi-agent memetic systems
Authors:
Byrski, A.
Schaefer, R.
Smołka, M.
Cotta, C.
Links:
https://bibliotekanauki.pl/articles/201942.pdf
Publication date:
2013
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
computational multi-agent systems
asymptotic analysis
global optimization
parallel evolutionary algorithms
Markov chain modeling
Description:
The paper introduces a stochastic model for a class of population-based global optimization meta-heuristics that generalizes existing models in the following ways. First, an individual becomes an active software agent characterized by a constant genotype and a meme that may change during the optimization process. Second, the model embraces the asynchronous processing of agents' actions. Third, we consider a wide variety of possible actions, including the conventional mixing operations (e.g. mutation, cloning, crossover) as well as migrations among demes and local optimization methods. Although the model fits many popular algorithms and strategies (e.g. genetic algorithms with tournament selection), it is mainly devoted to the study of memetic algorithms. The model is composed of two parts: the EMAS architecture (data structures and management strategies), which defines the space of states and the framework for stochastic agent actions, and a stationary Markov chain described in terms of this architecture. The transition probability function has been obtained and the Markov kernels for sample actions have been computed. The obtained theoretical results are helpful for studying metaheuristics conforming to the EMAS architecture. The designed synchronization allows a safe, coarse-grained parallel implementation and its effective, sub-optimal scheduling in a distributed computing environment. The proved strong ergodicity of the finite-state Markov chain yields the asymptotic stochastic guarantee of success, which in turn implies the liveness of a studied metaheuristic. The Markov chain delivers the sampling measure at an arbitrary step of the computation, which allows further asymptotic studies, e.g. on various kinds of stochastic convergence.
Source:
Bulletin of the Polish Academy of Sciences. Technical Sciences; 2013, 61, 1; 257-278
0239-7528
Appears in:
Bulletin of the Polish Academy of Sciences. Technical Sciences
Content provider:
Biblioteka Nauki
Article
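A minimal sketch (Python with NumPy) of the asymptotic stochastic guarantee of success discussed in the abstract above: the four-state transition matrix below is invented for illustration and is not the paper's EMAS model; the computation only shows that, for a finite chain with strictly positive transition probabilities, the probability of never having visited a designated "success" state (one whose population contains the global optimum) vanishes as the number of steps grows.

# Hypothetical 4-state ergodic chain; state 3 plays the role of the "success" state.
import numpy as np

P = np.array([
    [0.70, 0.15, 0.10, 0.05],
    [0.20, 0.60, 0.15, 0.05],
    [0.10, 0.20, 0.60, 0.10],
    [0.05, 0.05, 0.10, 0.80],
])
success = 3
others = [s for s in range(P.shape[0]) if s != success]

# Q restricts the chain to the non-success states; the probability of avoiding
# the success state for n steps is the total mass remaining in Q**n.
Q = P[np.ix_(others, others)]
start = np.array([1.0, 0.0, 0.0])   # start in state 0

for n in (1, 10, 50, 200):
    p_not_visited = (start @ np.linalg.matrix_power(Q, n)).sum()
    print(f"n={n:4d}  P(success state not yet visited) = {p_not_visited:.6f}")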
Title:
The island model as a Markov dynamic system
Authors:
Schaefer, R.
Byrski, A.
Smołka, M.
Links:
https://bibliotekanauki.pl/articles/331253.pdf
Publication date:
2012
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
genetic algorithms
asymptotic analysis
global optimization
parallel evolutionary algorithms
Markov chain modeling
Description:
Parallel multi-deme genetic algorithms are especially advantageous because they allow reducing computation time and can perform a much broader search than single-population ones. However, their formal analysis does not seem to have been studied exhaustively enough. In this paper we propose a mathematical framework describing a wide class of island-like strategies as a stationary Markov chain. Our approach makes extensive use of the modeling principles introduced by Vose, Rudolph and their collaborators. An original and crucial feature of the proposed framework is the mechanism of inter-deme agent operation synchronization, which is important from both a practical and a theoretical point of view. We show that under a mild assumption the resulting Markov chain is ergodic and the sequence of the related sampling measures converges to some invariant measure. The asymptotic guarantee of success is also obtained as a simple consequence of ergodicity. Moreover, if the cardinality of each island population grows to infinity, then the sequence of the limit invariant measures contains a weakly convergent subsequence. The formal description of the island model, obtained for the case of solving a single-objective problem, can also be extended to the multi-objective case.
Source:
International Journal of Applied Mathematics and Computer Science; 2012, 22, 4; 971-984
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
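A minimal sketch (Python with NumPy) of the ergodicity argument in the abstract above, using a toy chain rather than the paper's island-model state space: for an ergodic (irreducible, aperiodic) stochastic matrix P, the sequence of sampling measures mu_n = mu_0 P^n converges to the unique invariant measure pi satisfying pi = pi P, regardless of the initial measure. The matrix here is randomly generated for illustration only.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 5-state chain with all entries positive, hence ergodic.
P = rng.random((5, 5)) + 0.1
P /= P.sum(axis=1, keepdims=True)

# Invariant measure: left eigenvector of P for eigenvalue 1, normalised to sum to 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()

mu = np.zeros(5)
mu[0] = 1.0                       # arbitrary initial sampling measure
for n in range(1, 101):
    mu = mu @ P
    if n in (1, 10, 100):
        tv = 0.5 * np.abs(mu - pi).sum()
        print(f"n={n:3d}  total variation distance to pi = {tv:.2e}")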
Title:
Intelligent enterprise capital control based on Markov chain
Authors:
Andriushchenko, Kateryna
Liezina, Anastasiia
Lavruk, Vitalii
Sliusareva, Liudmyla
Rudevska, Viktoriia
Links:
https://bibliotekanauki.pl/articles/2175198.pdf
Publication date:
2022
Publisher:
Centrum Badań i Innowacji Pro-Akademia
Subjects:
Markov chain
intelligent control
stochastic modeling
investments
Description:
This work is devoted to the processes of creating technologies and to the use of their mathematical representation in the form of models in the context of the formation and development of an enterprise's intellectual capital. The goal was to prove or refute the possibility of applying Markov's theory in practice, namely by creating a stochastic model of an enterprise's intellectual capital in monetary terms, as manifested in investments in intangible assets. The initial model hypothesis is that investments in the enterprise's intangible assets are a factor in the transformation of intellectual capital into the company's value. Based on the results of applying the stochastic Markov chain model, the potential profit of the company's intangible assets, the main elements of which were intellectual capital assets, was estimated. A matrix of transition probabilities was formed and the limiting probabilities of the system states were modeled. The necessary conditions and the boundaries of the scope of the mathematical model are also determined. The proposed mathematical method of modeling the company's intellectual capital allows determining the contribution of each structural component to the formation of the value of the enterprise's intellectual capital, thereby making it possible to establish a current balance between all its elements, which contributes to a comprehensive study of the company's intellectual assets.
Source:
Acta Innovations; 2022, 45; 18-30
2300-5599
Appears in:
Acta Innovations
Content provider:
Biblioteka Nauki
Article
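A minimal sketch (Python with NumPy) of the kind of computation described in the abstract above: given a matrix of transition probabilities over intellectual-capital states, the limiting probabilities solve pi = pi P with the entries of pi summing to 1. The state names and transition probabilities below are hypothetical and are not the values estimated in the article.

import numpy as np

states = ["human capital", "structural capital", "relational capital"]
P = np.array([                 # hypothetical transition probabilities
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.3, 0.3, 0.4],
])

n = P.shape[0]
# Solve the linear system (P^T - I) pi = 0 together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

for name, p in zip(states, pi):
    print(f"limiting probability of {name!r}: {p:.3f}")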
Title:
Using discrete Markov chains in prediction of health economics behaviour
Authors:
Bauer, W.
Wieczór, A.
Links:
https://bibliotekanauki.pl/articles/94921.pdf
Publication date:
2017
Publisher:
Szkoła Główna Gospodarstwa Wiejskiego w Warszawie. Wydawnictwo Szkoły Głównej Gospodarstwa Wiejskiego w Warszawie
Subjects:
economic behavior
Primary Health Care
stochastic process modeling
Markov chain
Monte Carlo method
MCMC
PHC
Description:
The aim of this article is to present the concept of using discrete Markov chains to predict economic phenomena. This subject is important for two reasons. First, models based on Markov chains use the statistical information obtained while investigating the processes. Second, this way of modeling is highly flexible and can be used to simulate economic phenomena. In this paper the authors describe the idea of the modeling, present an example of a simple model of a primary health care patient population, and show preliminary simulation results.
Source:
Information Systems in Management; 2017, 6, 4; 259-269
2084-5537
2544-1728
Appears in:
Information Systems in Management
Content provider:
Biblioteka Nauki
Article
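A minimal sketch (Python with NumPy) in the spirit of the abstract above: a discrete Markov chain over patient states is simulated by Monte Carlo to predict how a primary-health-care population redistributes over time. The states and monthly transition probabilities are invented for illustration and are not taken from the article.

import numpy as np

states = ["healthy", "chronically ill", "acutely ill"]
P = np.array([                 # hypothetical monthly transition probabilities
    [0.90, 0.07, 0.03],
    [0.05, 0.85, 0.10],
    [0.40, 0.20, 0.40],
])

rng = np.random.default_rng(42)
# Initial population of 10,000 patients, mostly healthy.
population = rng.choice(len(states), size=10_000, p=[0.80, 0.15, 0.05])

for month in range(1, 13):
    # Each patient moves to a new state according to the row of P
    # corresponding to the patient's current state.
    population = np.array([rng.choice(len(states), p=P[s]) for s in population])
    if month in (1, 6, 12):
        shares = np.bincount(population, minlength=len(states)) / len(population)
        print(f"month {month:2d}: " +
              ", ".join(f"{name}={share:.2%}" for name, share in zip(states, shares)))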