It is well known that artificial neural networks are able to learn when provided with new data. A special case of supervised learning is the mutual learning of two neural networks. Applied to a specific architecture called the Tree Parity Machine (abbreviated as TPM network), this type of learning drives the weight vectors of both networks to become identical. This phenomenon is called network synchronization and can be exploited to construct a cryptographic key exchange protocol. At the beginning of the learning process, the weights of both networks are initialized with random values. The time needed to synchronize the networks depends on these initial weight values and on the input vectors, which are also generated at random at each learning step. This paper discusses the relationship between the distribution from which the initial weights of the networks are drawn and the compatibility of those weights. To measure the initial compatibility of the weights, a modified Euclidean metric is used. This tool makes it possible to express the compatibility of the network weights in a way that scales with the size of the network, which in turn allows TPM networks of various sizes to be compared. The paper presents the simulation results and discusses them in the context of the above issue.
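For readers unfamiliar with the mechanism summarized above, the following is a minimal sketch of TPM mutual learning. It assumes the standard construction (K hidden units, N inputs per unit, integer weights bounded by L) and the common Hebbian update rule; the parameter values are illustrative, and the normalized distance shown is only one plausible reading of the "modified Euclidean metric" mentioned in the abstract, not necessarily the exact form used in the paper.

```python
import numpy as np

rng = np.random.default_rng()

# Illustrative TPM sizes: K hidden units, N inputs per unit, weight bound L.
K, N, L = 3, 10, 3

def random_weights():
    # Initial weights drawn uniformly from {-L, ..., L}; the paper studies how
    # the choice of this initial distribution influences synchronization.
    return rng.integers(-L, L + 1, size=(K, N))

def tpm_output(w, x):
    # sigma_k = sign(w_k . x_k), with sign(0) mapped to -1; tau is the product.
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau):
    # Update only the hidden units that agree with the overall output,
    # then clip the weights back into [-L, L].
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + tau * x[k], -L, L)

def normalized_distance(wA, wB):
    # Assumed normalization: Euclidean distance rescaled by network size and
    # weight range, so TPMs of different sizes can be compared on one scale.
    return np.linalg.norm(wA - wB) / np.sqrt(K * N * (2 * L) ** 2)

def synchronize(max_steps=100_000):
    wA, wB = random_weights(), random_weights()
    for step in range(1, max_steps + 1):
        x = rng.choice([-1, 1], size=(K, N))   # common random input vector
        sA, tA = tpm_output(wA, x)
        sB, tB = tpm_output(wB, x)
        if tA == tB:                            # learn only when outputs agree
            hebbian_update(wA, x, sA, tA)
            hebbian_update(wB, x, sB, tB)
        if np.array_equal(wA, wB):              # full synchronization reached
            return step
    return None

if __name__ == "__main__":
    print("synchronized after", synchronize(), "steps")
```

In a key exchange setting, the two parties each hold one of the networks, exchange only the outputs tau over the public channel, and use the synchronized weight matrix as the shared secret; the number of steps returned above corresponds to the synchronization time studied in the paper.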