
You are searching for the phrase "speech recognition" by criterion: Subject


Title:
Enhancing Speech Recognition in Adverse Listening Environments: The Impact of Brief Musical Training on Older Adults
Authors:
Nandakumar, Akhila R
Somashekara, Haralakatta Shivananjappa
Kanagokar, Vibha
Pitchaimuthu, Arivudai Nambi
Links:
https://bibliotekanauki.pl/articles/31339763.pdf
Publication date:
2024
Publisher:
Polska Akademia Nauk. Czasopisma i Monografie PAN
Subjects:
musical training
carnatic music
speech recognition in noise
speech recognition in reverberation
Description:
The present research investigated the effects of short-term musical training on speech recognition in adverse listening conditions in older adults. A total of 30 Kannada-speaking participants with no history of gross otologic, neurologic, or cognitive problems were divided equally into an experimental group (M = 63 years) and a control group (M = 65 years). Baseline and follow-up assessments for speech in noise (SNR50) and reverberation were carried out for both groups. The participants in the experimental group underwent Carnatic classical music training, which lasted seven days. Bayesian likelihood estimates revealed no difference in SNR50 or speech recognition scores in reverberation between the baseline and follow-up assessments for the control group. In the experimental group, however, SNR50 decreased and speech recognition scores in reverberation improved following musical training, suggesting a positive impact of the training. The improved performance suggests that short-term musical training using Carnatic music can serve as a tool to improve speech recognition abilities in adverse listening conditions in older adults.
Source:
Archives of Acoustics; 2024, 49, 1; 3-9
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
Phoneme Segmentation Based on Wavelet Spectra Analysis
Authors:
Ziółko, B.
Manandhar, S.
Wilson, R. C.
Ziółko, M.
Links:
https://bibliotekanauki.pl/articles/177480.pdf
Publication date:
2011
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
speech recognition
speech segmentation
discrete wavelet transform
Description:
A phoneme segmentation method based on the analysis of discrete wavelet transform spectra is described. The localization of phoneme boundaries is particularly useful in speech recognition, as it enables more accurate acoustic models: phoneme lengths provide additional information for parametrization. Our method relies on the values of power envelopes and their first derivatives for six frequency subbands. Specific scenarios typical for phoneme boundaries are searched for. Discrete times with such events are noted and graded using a distribution-like event function, which represents the change of the energy distribution in the frequency domain. The exact definition of this method is given in the paper. The final decision on the localization of boundaries is taken by analysing the event function; boundaries are therefore extracted using information from all subbands. The method was developed on a small set of hand-segmented Polish words and tested on another, larger corpus containing 16 425 utterances. A recall and precision measure specifically designed to assess the quality of speech segmentation was adapted using fuzzy sets, yielding an F-score of 72.49%.
Source:
Archives of Acoustics; 2011, 36, 1; 29-47
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
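As a rough illustration of the idea in the abstract above — tracking subband power envelopes and their first derivatives, and grading candidate boundaries with an event function — here is a minimal NumPy sketch. The input format, smoothing length, and threshold are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def boundary_events(subband_power, smooth=5, threshold=0.1):
    """Toy phoneme-boundary event function from subband power envelopes.

    subband_power: array of shape (n_subbands, n_frames), e.g. energies of
    six DWT subbands per analysis frame (hypothetical input format).
    Returns the event function and the frames where it peaks.
    """
    # Smooth each envelope with a short moving average.
    kernel = np.ones(smooth) / smooth
    env = np.array([np.convolve(b, kernel, mode="same") for b in subband_power])

    # Magnitude of the first derivative of each envelope over time.
    deriv = np.abs(np.diff(env, axis=1))

    # Event function: sum of derivative magnitudes across subbands --
    # large when the energy distribution across bands changes quickly.
    event = deriv.sum(axis=0)
    event = event / (event.max() + 1e-12)

    # Candidate boundaries: frames where the event function exceeds the
    # threshold and is a local maximum.
    peaks = [t for t in range(1, len(event) - 1)
             if event[t] > threshold
             and event[t] >= event[t - 1] and event[t] >= event[t + 1]]
    return event, peaks
```

A synthetic "phoneme change" (an abrupt shift of energy between subbands) produces an event-function peak at the change point, which is the behaviour the boundary detector relies on.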
Title:
Estimation of Hardware Requirements for Isolated Speech Recognition on Embedded Systems
Authors:
Kłobucki, K.
Mąka, T.
Links:
https://bibliotekanauki.pl/articles/227186.pdf
Publication date:
2011
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
isolated speech recognition
ASR
resources estimation
Description:
In recent years, speech recognition functionality has increasingly been added to embedded devices. Because of the limited resources of these devices, there is a need to assess whether a given speech recognition system is feasible within the constraints, as well as to estimate how many resources it needs. In this paper, an attempt is made to define a technique for estimating hardware resource usage in the speech recognition task. To determine the parameters and their dependencies, two systems were tested: the first utilized the Dynamic Time Warping (DTW) pattern-matching technique, the second used Hidden Markov Models (HMM). For each case, the recognition rate, recognition time, vocabulary database size, and learning time were measured. The obtained results were used to define linear and polynomial regression models, and an estimation algorithm was developed from these models. Testing of the proposed approach showed that even low-end mobile phones have sufficient hardware resources to realise an isolated speech recognition system.
Source:
International Journal of Electronics and Telecommunications; 2011, 57, 4; 483-488
2300-1933
Appears in:
International Journal of Electronics and Telecommunications
Content provider:
Biblioteka Nauki
Article
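The abstract above compares DTW- and HMM-based recognisers; the DTW side reduces to a classic dynamic-programming alignment between two feature sequences, sketched below. This is a textbook implementation, not the paper's code:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences.

    a, b: arrays of shape (n_frames, n_features). Returns the accumulated
    cost of the optimal alignment path (no backtracking, for brevity).
    """
    n, m = len(a), len(b)
    # cost[i, j] = best accumulated cost aligning a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```

In an isolated-word recogniser, the spoken utterance is compared against every vocabulary template this way and the word with the smallest distance wins — which is exactly why vocabulary size drives both memory and recognition time in the resource estimates discussed above.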
Title:
Sentence Recognition in the Presence of Competing Speech Messages Presented in Audiometric Booths with Reverberation Times of 0.4 and 0.6 Seconds
Authors:
Abouchacra, K. S.
Koehnke, J.
Besing, J.
Letowski, T.
Links:
https://bibliotekanauki.pl/articles/177482.pdf
Publication date:
2011
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
sound field testing
reverberation
speech recognition
Description:
This study examined whether differences in reverberation time (RT) between typical sound field test rooms used in audiology clinics affect speech recognition in multi-talker environments. Separate groups of participants listened to target speech sentences presented simultaneously with 0 to 3 competing sentences through four spatially separated loudspeakers in two sound field test rooms having RT = 0.6 s (Site 1: N = 16) and RT = 0.4 s (Site 2: N = 12). Speech recognition scores (SRSs) for the Synchronized Sentence Set (S3) test and subjective estimates of perceived task difficulty were recorded. The results indicate that the change in room RT from 0.4 to 0.6 s did not significantly influence SRSs in quiet or in the presence of one competing sentence. However, this small change in RT affected SRSs when 2 and 3 competing sentences were present, resulting in mean SRSs that were about 8–10% better in the room with RT = 0.4 s. Perceived task difficulty ratings increased as the complexity of the task increased, with average ratings similar across test sites for each level of sentence competition. These results suggest that site-specific normative data must be collected for sound field rooms if clinicians want to use two or more directional speech maskers during routine sound field testing.
Source:
Archives of Acoustics; 2011, 36, 1; 3-14
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
System rozpoznawania mowy z ograniczonym słownikiem
Speech recognition system with limited dictionary
Authors:
Grabowski, D.
Kwiatkowska, M.
Świerczewski, Ł.
Links:
https://bibliotekanauki.pl/articles/131953.pdf
Publication date:
2014
Publisher:
Wrocławska Wyższa Szkoła Informatyki Stosowanej Horyzont
Subjects:
speech recognition
ASR
MFCC
Description:
The motivation of this work is to discuss and compare popular speech recognition algorithms on various systems. The collected information is presented in a relatively short form, without in-depth mathematical proofs, whose full presentation would in any case require reference to specialised sources. Selected problems associated with ASR (Automatic Speech Recognition) and the prospects for solving them are discussed. On the basis of available solutions, an application module was developed that compares collected recordings with respect to the similarity of the speech signal and presents the results in tabular form. For demonstration purposes, the resulting library was used in a complete application that executes commands based on words spoken into a microphone. The results serve not so much as final conclusions on speech recognition as guidance for further analysis and research. Despite advances in ASR research, there are still no algorithms with an effectiveness exceeding 95%. One motivation for further work is, for example, the social exclusion of people who cannot rely on vision-based communication.
Source:
Biuletyn Naukowy Wrocławskiej Wyższej Szkoły Informatyki Stosowanej. Informatyka; 2014, 4; 44-53
2082-9892
Appears in:
Biuletyn Naukowy Wrocławskiej Wyższej Szkoły Informatyki Stosowanej. Informatyka
Content provider:
Biblioteka Nauki
Article
Title:
Two-Microphone Dereverberation for Automatic Speech Recognition of Polish
Authors:
Kundegorski, M.
Jackson, P. J. B.
Ziółko, B.
Links:
https://bibliotekanauki.pl/articles/176431.pdf
Publication date:
2014
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
speech enhancement
reverberation
automatic speech recognition
ASR
Polish
Description:
Reverberation is a common problem for many speech technologies, such as automatic speech recognition (ASR) systems. This paper investigates a novel combination of precedence, binaural, and statistical independence cues for enhancing reverberant speech prior to ASR, under adverse acoustical conditions, when two microphone signals are available. The results of the enhancement are evaluated in terms of relevant signal measures and accuracy for both English and Polish ASR tasks. These show inconsistencies between the signal and recognition measures, although in recognition the proposed method consistently outperforms all other combinations and the spectral-subtraction baseline.
Source:
Archives of Acoustics; 2014, 39, 3; 411-420
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
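The spectral-subtraction baseline mentioned in the abstract above can be sketched in a few lines. This is a generic single-channel magnitude-subtraction version with illustrative framing and parameters, not the paper's two-microphone method:

```python
import numpy as np

def spectral_subtraction(noisy, noise_est, frame_len=256,
                         over_sub=1.0, floor=0.01):
    """Minimal magnitude spectral subtraction (non-overlapping frames).

    noisy: the degraded signal; noise_est: a noise-only excerpt used to
    estimate the average noise magnitude spectrum.
    """
    # Average noise magnitude spectrum over whole frames of the excerpt.
    usable = len(noise_est) // frame_len * frame_len
    noise_mag = np.abs(np.fft.rfft(
        noise_est[:usable].reshape(-1, frame_len), axis=1)).mean(axis=0)

    out = np.zeros(len(noisy))
    for start in range(0, len(noisy) - frame_len + 1, frame_len):
        spec = np.fft.rfft(noisy[start:start + frame_len])
        mag = np.abs(spec) - over_sub * noise_mag    # subtract noise estimate
        mag = np.maximum(mag, floor * np.abs(spec))  # spectral floor
        phase = np.angle(spec)                       # keep the noisy phase
        out[start:start + frame_len] = np.fft.irfft(
            mag * np.exp(1j * phase), n=frame_len)
    return out
```

Practical systems add windowing with overlap-add and a smoother noise tracker; the point here is only the subtract-and-floor operation that the paper's proposed cue combination is benchmarked against.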
Title:
Spectral methods in Polish emotional speech recognition
Authors:
Powroźnik, P.
Czerwiński, D.
Links:
https://bibliotekanauki.pl/articles/102087.pdf
Publication date:
2016
Publisher:
Stowarzyszenie Inżynierów i Techników Mechaników Polskich
Subjects:
artificial neural network
spectrogram
emotional speech recognition
Description:
In this article the issue of emotion recognition based on the analysis of Polish emotional speech signals is presented. The Polish database of emotional speech, prepared and shared by the Medical Electronics Division of the Lodz University of Technology, was used for the research. The speech signal was processed by Artificial Neural Networks (ANN), whose inputs were derived from the signal spectrogram. The research was conducted for three different spectrogram divisions. The ANN consists of four layers, but the number of neurons in each layer depends on the spectrogram division. The research focused on six emotional states: a neutral state, sadness, joy, anger, fear, and boredom. The average effectiveness of emotion recognition was about 80%.
Source:
Advances in Science and Technology. Research Journal; 2016, 10, 32; 73-81
2299-8624
Appears in:
Advances in Science and Technology. Research Journal
Content provider:
Biblioteka Nauki
Article
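One simple way to turn a spectrogram division into a fixed-length ANN input, as the abstract above describes, is to average the magnitude over a grid of blocks. The grid sizes below are illustrative assumptions, not the paper's actual divisions:

```python
import numpy as np

def spectrogram_block_features(spec, rows=4, cols=8):
    """Average spectrogram magnitude over a rows x cols grid of blocks.

    spec: magnitude spectrogram of shape (n_freq_bins, n_time_frames).
    Returns a flat vector of length rows*cols, suitable as the input
    layer of a small feed-forward ANN.
    """
    n_freq, n_time = spec.shape
    feats = np.empty((rows, cols))
    for r in range(rows):
        f0, f1 = r * n_freq // rows, (r + 1) * n_freq // rows
        for c in range(cols):
            t0, t1 = c * n_time // cols, (c + 1) * n_time // cols
            feats[r, c] = spec[f0:f1, t0:t1].mean()  # mean energy per block
    return feats.ravel()
```

Changing `rows` and `cols` changes the input-layer size, which is why, in the study above, the network layout depends on the chosen spectrogram division.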
Title:
Hybrid CNN-LiGRU Acoustic Modeling Using SincNet Raw Waveform for Hindi ASR
Authors:
Kumar, Ankit
Aggarwal, Rajesh Kumar
Links:
https://bibliotekanauki.pl/articles/1839250.pdf
Publication date:
2020
Publisher:
Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie. Wydawnictwo AGH
Subjects:
automatic speech recognition
CNN
CNN-LiGRU
DNN
Description:
Deep neural networks (DNN) currently play a vital role in automatic speech recognition (ASR). The convolutional neural network (CNN) and the recurrent neural network (RNN) are advanced variants of the DNN, well suited to the spatial and temporal properties of a speech signal, respectively, both of which strongly affect accuracy. Fed with the raw speech signal, a CNN can outperform precomputed acoustic features. Recently, a novel first convolution layer named SincNet was proposed to increase interpretability and system performance. In this work, we propose to combine a SincNet-CNN with a light gated recurrent unit (LiGRU) to reduce the computational load and increase interpretability while maintaining high accuracy. Different configurations of the hybrid model are extensively examined to achieve this goal. All of the experiments were conducted using the Kaldi and PyTorch-Kaldi toolkits with a Hindi speech dataset. The proposed model reports an 8.0% word error rate (WER).
Source:
Computer Science; 2020, 21 (4); 397-417
1508-2806
2300-7036
Appears in:
Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
Recognition of the numbers in the Polish language
Authors:
Plichta, A.
Gąciarz, T.
Krzywdziński, T.
Links:
https://bibliotekanauki.pl/articles/308844.pdf
Publication date:
2013
Publisher:
Instytut Łączności - Państwowy Instytut Badawczy
Subjects:
Automatic Speech Recognition
compressed sensing
Sparse Classification
Description:
Automatic Speech Recognition is one of the hottest research and application problems in today's ICT. The huge progress in intelligent mobile systems requires the implementation of new services in which users communicate with devices by audio commands. These systems must additionally be integrated with highly distributed infrastructures such as computational and mobile clouds, Wireless Sensor Networks (WSNs), and many others. This paper presents recent research results on the recognition of separate words and of words in short contexts (limited to numbers) articulated in the Polish language. Compressed Sensing Theory (CST) is applied for the first time as a speech recognition methodology. The effectiveness of the proposed methodology is demonstrated in numerical tests for both separate words and short sentences.
Source:
Journal of Telecommunications and Information Technology; 2013, 4; 70-78
1509-4553
1899-8852
Appears in:
Journal of Telecommunications and Information Technology
Content provider:
Biblioteka Nauki
Article
Title:
Deep Belief Neural Networks and Bidirectional Long-Short Term Memory Hybrid for Speech Recognition
Authors:
Brocki, Ł.
Marasek, K.
Links:
https://bibliotekanauki.pl/articles/177625.pdf
Publication date:
2015
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
deep belief neural networks
long-short term memory
bidirectional recurrent neural networks
speech recognition
large vocabulary continuous speech recognition
Description:
This paper describes a Deep Belief Neural Network (DBNN) and Bidirectional Long-Short Term Memory (LSTM) hybrid used as an acoustic model for speech recognition. It has been demonstrated by many independent researchers that DBNNs exhibit superior performance to other known machine learning frameworks in terms of speech recognition accuracy; their superiority comes from the fact that they are deep learning networks. However, a trained DBNN is simply a feed-forward network with no internal memory, unlike Recurrent Neural Networks (RNNs), which are Turing complete and do possess internal memory, allowing them to make use of a longer context. In this paper, an experiment is performed that builds a hybrid of a DBNN with an advanced bidirectional RNN used to process its output. The results show that using the new DBNN-BLSTM hybrid as the acoustic model for Large Vocabulary Continuous Speech Recognition (LVCSR) increases word recognition accuracy. However, the new model has many parameters and in some cases may suffer performance issues in real-time applications.
Source:
Archives of Acoustics; 2015, 40, 2; 191-195
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
Phase Autocorrelation Bark Wavelet Transform (PACWT) Features for Robust Speech Recognition
Authors:
Majeed, S. A.
Husain, H.
Samad, S. A.
Links:
https://bibliotekanauki.pl/articles/177326.pdf
Publication date:
2015
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
speech recognition
feature extraction
phase autocorrelation
wavelet transform
Description:
In this paper, a new feature-extraction method is proposed to achieve robustness in speech recognition systems. The method combines the benefits of phase autocorrelation (PAC) with the bark wavelet transform. PAC uses the angle to measure correlation instead of the traditional autocorrelation measure, whereas the bark wavelet transform is a special type of wavelet transform designed particularly for speech signals. The features extracted by this combined method are called phase autocorrelation bark wavelet transform (PACWT) features. The speech recognition performance of the PACWT features is evaluated and compared to the conventional feature-extraction method, mel-frequency cepstral coefficients (MFCC), using the TI-Digits database under different types and levels of noise. The database was divided into male and female data. The results show that the word recognition rate using the PACWT features for noisy male data (white noise at 0 dB SNR) is 60%, whereas it is 41.35% for the MFCC features under identical conditions.
Source:
Archives of Acoustics; 2015, 40, 1; 25-31
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
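The phase-autocorrelation idea in the abstract above — replacing the dot-product correlation with the angle between a frame and its shifted copy — can be sketched as follows. A circular shift is assumed here so that both vectors keep the same norm; the bark wavelet stage and cepstral processing of the full PACWT front end are omitted:

```python
import numpy as np

def phase_autocorrelation(frame, max_lag):
    """Phase autocorrelation (PAC) of one speech frame.

    Instead of the raw autocorrelation <x, x_k>, PAC uses the angle
    between the frame x and its shifted copy x_k, which is less
    sensitive to amplitude variations caused by additive noise.
    """
    x = np.asarray(frame, dtype=float)
    norm_sq = np.dot(x, x)
    pac = np.empty(max_lag + 1)
    for k in range(max_lag + 1):
        shifted = np.roll(x, k)  # circular shift keeps the norm equal
        c = np.dot(x, shifted) / (norm_sq + 1e-12)  # normalized correlation
        pac[k] = np.arccos(np.clip(c, -1.0, 1.0))   # angle-based measure
    return pac
```

At lag 0 the angle is (numerically) zero, and all values lie in [0, π]; downstream processing treats this angular sequence the way a conventional front end treats the autocorrelation sequence.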
Title:
Automatic disordered sound repetition recognition in continuous speech using CWT and Kohonen network
Authors:
Codello, I.
Kuniszyk-Jóźkowiak, W.
Smołka, E.
Kobus, A.
Links:
https://bibliotekanauki.pl/articles/106192.pdf
Publication date:
2012
Publisher:
Uniwersytet Marii Curie-Skłodowskiej. Wydawnictwo Uniwersytetu Marii Curie-Skłodowskiej
Subjects:
speech recognition
speech disorders
sound repetition
Continuous Wavelet Transform
WaveBlaster
Description:
Automatic recognition of disorders in speech can be very helpful for a therapist monitoring the therapy progress of patients with disordered speech. This article focuses on sound repetitions. The signal is analyzed using the Continuous Wavelet Transform with 16 bark scales. Using a silence-finding algorithm, speech fragments are automatically located and cut out. Each fragment is converted into a fixed-length vector and passed to a Kohonen network, and the Kohonen winning-neuron result is then fed to a 3-layer perceptron. Most of the analysis was performed, and the results obtained, using the authors' program WaveBlaster. The STATISTICA package was used to find the best perceptron, which was then imported back into WaveBlaster and used for the automatic detection of blockades. The problem presented in this article is part of our research work aimed at creating an automatic disordered speech recognition system.
Source:
Annales Universitatis Mariae Curie-Skłodowska. Sectio AI, Informatica; 2012, 12, 2; 39-48
1732-1360
2083-3628
Appears in:
Annales Universitatis Mariae Curie-Skłodowska. Sectio AI, Informatica
Content provider:
Biblioteka Nauki
Article
Title:
Evaluation of speech corpora for speech and speaker recognition systems
Wykorzystanie baz mowy do testowania systemów rozpoznawania mowy oraz mówcy
Authors:
Ślimok, J.
Kotas, J.
Links:
https://bibliotekanauki.pl/articles/155955.pdf
Publication date:
2014
Publisher:
Stowarzyszenie Inżynierów i Techników Mechaników Polskich
Subjects:
speech recognition
speech processing
speech corpora
Description:
Creating advanced speech processing and speech recognition techniques requires working with real voice samples, so access to various speech corpora is extremely helpful. With such resources available during development, errors can be detected more quickly and algorithm parameters estimated better. The available speech corpora differ in, among other things, quality, recording conditions, and possible applications: some contain recorded telephone conversations, while others contain utterances captured with several high-quality microphones. Using public databases has one more important advantage: it allows algorithms created by different research centres, following the same methodology, to be compared, with the results presented as benchmarks. Selecting a proper voice sample set is therefore a key element in the development of a speech processing application, as each speech corpus has been adapted to support different aspects of speech processing. The goal of this paper is to present the available speech corpora. Each corpus is shown in the form of a table describing the features helpful in choosing a suitable set of voice samples.
Source:
Pomiary Automatyka Kontrola; 2014, R. 60, nr 6, 6; 373-375
0032-4140
Appears in:
Pomiary Automatyka Kontrola
Content provider:
Biblioteka Nauki
Article
Title:
Speech Recognition in an Enclosure with a Long Reverberation Time
Authors:
Kociński, J.
Ozimek, E.
Links:
https://bibliotekanauki.pl/articles/177544.pdf
Publication date:
2016
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
speech intelligibility
speech recognition
sentence test
reverberation time
clarity
speech transmission index
Description:
The aim of this work was to measure subjective speech intelligibility in an enclosure with a long reverberation time and to compare the results with objective parameters. Impulse Responses (IRs) were first measured with a dummy head at different measurement points in the enclosure. The following objective parameters were calculated with the Dirac 4.1 software: Reverberation Time (RT), Early Decay Time (EDT), weighted Clarity (C50), and Speech Transmission Index (STI). For the chosen measurement points, the IRs were convolved with the Polish Sentence Test (PST) and logatome tests. The PST was presented against a background of babble noise, and the speech reception threshold, SRT (i.e. the SNR yielding 50% speech intelligibility), was evaluated for those points. The relationship of sentence and logatome recognition vs. STI was determined. It was found that the final SRT data correlate well with the STI and can be expressed by a psychometric function. The difference between the SRT determined without reverberation and under reverberant conditions appeared to be a good measure of the effect of reverberation on speech intelligibility in a room. In addition, speech intelligibility with and without the sound amplification system installed in the enclosure was compared.
Source:
Archives of Acoustics; 2016, 41, 2; 255-264
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
Laughter Classification Using Deep Rectifier Neural Networks with a Minimal Feature Subset
Authors:
Gosztolya, G.
Beke, A.
Neuberger, T.
Tóth, L.
Links:
https://bibliotekanauki.pl/articles/177910.pdf
Publication date:
2016
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
speech recognition
speech technology
computational paralinguistics
laughter detection
deep neural networks
Description:
Laughter is one of the most important paralinguistic events, and it has specific roles in human conversation. The automatic detection of laughter occurrences in human speech can aid automatic speech recognition systems as well as some paralinguistic tasks such as emotion detection. In this study we apply Deep Neural Networks (DNN) to laughter detection, as this technology is nowadays considered state of the art in similar tasks such as phoneme identification. We carry out our experiments on two corpora containing spontaneous speech in two languages (Hungarian and English). Also, since we find it reasonable that not all frequency regions are required for efficient laughter detection, we perform feature selection to find a sufficient feature subset.
Source:
Archives of Acoustics; 2016, 41, 4; 669-682
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article