
Search results for the phrase "virtual team" by criterion: Subject


Showing 1-1 of 1
Title:
Swarm intelligence algorithm based on competitive predators with dynamic virtual teams
Authors:
Yang, S.
Sato, Y.
Links:
https://bibliotekanauki.pl/articles/91592.pdf
Publication date:
2017
Publisher:
Społeczna Akademia Nauk w Łodzi. Polskie Towarzystwo Sieci Neuronowych
Subjects:
swarm intelligence
fitness predator optimizer
dynamic virtual team
population diversity
Description:
In our previous work, the Fitness Predator Optimizer (FPO) was proposed to avoid premature convergence on multimodal problems. In FPO, all particles are treated as predators, and only the competitive, powerful predators selected as elites receive the limited opportunity to update. Generating elites by roulette wheel selection increases individual independence and reduces rapid social collaboration. Experimental results show that FPO provides excellent global exploration and local-minima avoidance simultaneously. However, on higher-dimensional multimodal problems, slow convergence becomes FPO's bottleneck. A dynamic virtual team model is therefore incorporated into FPO, yielding DFPO, to accelerate the early convergence rate. In this paper, DFPO is described more precisely and a variant, DFPO-r, is proposed to improve its performance. DFPO-r adds a team-size selection method intended to increase population diversity, which is one of the most important factors determining an optimization algorithm's performance; a higher degree of population diversity helps DFPO-r alleviate premature convergence. The selection strategy is to choose the team size that yields the higher degree of population diversity. Ten well-known multimodal benchmark functions are used to evaluate the solution capability of DFPO and DFPO-r, and six of them are additionally set to 100 dimensions to compare DFPO and DFPO-r with LBest PSO, Dolphin Partner Optimization and FPO. Experimental results show that both DFPO and DFPO-r achieve the desired performance, and that DFPO-r is more robust than DFPO in the experimental study.
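
The description mentions two mechanisms: elites are drawn by roulette wheel selection over fitness, and DFPO-r chooses its team size so that population diversity stays high. The snippet below is a minimal, hypothetical Python sketch of those two ideas (NumPy assumed); the function names, the centroid-distance diversity measure, and the candidate team sizes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch of two ideas from the abstract; not the authors' code.

def roulette_wheel_select(fitness, rng):
    """Pick one particle index with probability proportional to fitness quality.

    For minimization, lower fitness is better, so values are inverted before
    normalizing into selection probabilities.
    """
    weights = fitness.max() - fitness + 1e-12   # better (smaller) fitness -> larger weight
    probs = weights / weights.sum()
    return rng.choice(len(fitness), p=probs)

def population_diversity(positions):
    """Mean distance of particles from the swarm centroid (a common diversity measure)."""
    centroid = positions.mean(axis=0)
    return float(np.mean(np.linalg.norm(positions - centroid, axis=1)))

def choose_team_size(positions, candidate_sizes, rng):
    """Pick the candidate team size whose random partition gives the highest
    average per-team diversity (an illustrative stand-in for DFPO-r's rule)."""
    best_size, best_div = candidate_sizes[0], -np.inf
    n = len(positions)
    for size in candidate_sizes:
        order = rng.permutation(n)
        teams = [order[i:i + size] for i in range(0, n, size)]
        div = np.mean([population_diversity(positions[t]) for t in teams if len(t) > 1])
        if div > best_div:
            best_size, best_div = size, div
    return best_size

# Toy usage: 30 particles on a 100-dimensional sphere function.
rng = np.random.default_rng(0)
positions = rng.uniform(-5.0, 5.0, size=(30, 100))
fitness = np.sum(positions ** 2, axis=1)
elite = roulette_wheel_select(fitness, rng)
team_size = choose_team_size(positions, candidate_sizes=[3, 5, 6], rng=rng)
print(f"elite index: {elite}, chosen team size: {team_size}")
```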
Source:
Journal of Artificial Intelligence and Soft Computing Research; 2017, 7, 2; 87-101
ISSN: 2083-2567; 2449-6499
Appears in:
Journal of Artificial Intelligence and Soft Computing Research
Content provider:
Biblioteka Nauki
Article
