
You are searching for the phrase "Automatic Speech Recognition" by criterion: Subject


Showing 1-14 of 14
Title:
Hybrid CNN-LiGRU acoustic modeling using SincNet raw waveform for Hindi ASR
Authors:
Kumar, Ankit
Aggarwal, Rajesh Kumar
Links:
https://bibliotekanauki.pl/articles/1839250.pdf
Publication date:
2020
Publisher:
Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie. Wydawnictwo AGH
Keywords:
automatic speech recognition
CNN
CNN-LiGRU
DNN
Description:
Deep neural networks (DNNs) currently play a vital role in automatic speech recognition (ASR). The convolutional neural network (CNN) and the recurrent neural network (RNN) are advanced variants of the DNN: they are well suited to modeling the spatial and temporal properties of a speech signal, respectively, and both properties strongly affect accuracy. When operating directly on the raw speech signal, a CNN can outperform precomputed acoustic features. Recently, a novel first convolutional layer named SincNet was proposed to increase interpretability and system performance. In this work, we propose to combine a SincNet-CNN front-end with a light gated recurrent unit (LiGRU) to reduce the computational load and increase interpretability while maintaining high accuracy. Different configurations of the hybrid model are examined extensively to achieve this goal. All experiments were conducted using the Kaldi and PyTorch-Kaldi toolkits with a Hindi speech dataset. The proposed model reports an 8.0% word error rate (WER).
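The abstract names the LiGRU without defining it. As a rough illustration of the recurrent part of such a hybrid, the sketch below implements a single light gated recurrent unit cell in PyTorch, following the general LiGRU idea (a GRU with the reset gate dropped and a ReLU candidate activation) rather than the authors' exact configuration; layer sizes are arbitrary and the SincNet front-end and batch normalization are omitted.

```python
# Minimal LiGRU cell sketch (illustrative only): a GRU without the reset gate,
# with a ReLU candidate activation. Not the authors' implementation.
import torch
import torch.nn as nn

class LiGRUCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.hidden_size = hidden_size
        # Update gate and candidate state share the same inputs.
        self.wz = nn.Linear(input_size, hidden_size)
        self.uz = nn.Linear(hidden_size, hidden_size, bias=False)
        self.wh = nn.Linear(input_size, hidden_size)
        self.uh = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, x, h):
        # x: (batch, input_size), h: (batch, hidden_size)
        z = torch.sigmoid(self.wz(x) + self.uz(h))      # update gate
        h_tilde = torch.relu(self.wh(x) + self.uh(h))   # candidate state (ReLU, no reset gate)
        return z * h + (1.0 - z) * h_tilde

# Toy usage: run the cell over a random "feature" sequence.
cell = LiGRUCell(input_size=40, hidden_size=64)
h = torch.zeros(8, 64)                                  # batch of 8 utterances
for frame in torch.randn(100, 8, 40):                   # 100 time frames
    h = cell(frame, h)
print(h.shape)  # torch.Size([8, 64])
```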
Source:
Computer Science; 2020, 21 (4); 397-417
1508-2806
2300-7036
Appears in:
Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
Two-Microphone Dereverberation for Automatic Speech Recognition of Polish
Authors:
Kundegorski, M.
Jackson, P. J. B.
Ziółko, B.
Links:
https://bibliotekanauki.pl/articles/176431.pdf
Publication date:
2014
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Keywords:
speech enhancement
reverberation
automatic speech recognition
ASR
Polish
Description:
Reverberation is a common problem for many speech technologies, such as automatic speech recognition (ASR) systems. This paper investigates the novel combination of precedence, binaural and statistical independence cues for enhancing reverberant speech, prior to ASR, under these adverse acoustical conditions when two microphone signals are available. Results of the enhancement are evaluated in terms of relevant signal measures and accuracy for both English and Polish ASR tasks. These show inconsistencies between the signal and recognition measures, although in recognition the proposed method consistently outperforms all other combinations and the spectral-subtraction baseline.
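The spectral-subtraction baseline mentioned above can be sketched generically. The code below applies single-channel magnitude spectral subtraction with a noise estimate taken from the first few frames, using SciPy's STFT; the frame length and over-subtraction factor are arbitrary choices, and this is not the two-microphone method proposed in the paper.

```python
# Generic single-channel spectral subtraction sketch (baseline illustration only).
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(x, fs, noise_frames=10, alpha=2.0, floor=0.02):
    """Subtract an average noise magnitude spectrum estimated from the first frames."""
    f, t, X = stft(x, fs=fs, nperseg=512)
    mag, phase = np.abs(X), np.angle(X)
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)  # noise estimate
    clean_mag = np.maximum(mag - alpha * noise_mag, floor * mag)   # over-subtract, then floor
    _, y = istft(clean_mag * np.exp(1j * phase), fs=fs, nperseg=512)
    return y

# Toy usage with a synthetic noisy signal.
fs = 16000
x = 0.1 * np.random.randn(fs) + np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
y = spectral_subtraction(x, fs)
print(y.shape)
```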
Source:
Archives of Acoustics; 2014, 39, 3; 411-420
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
Recognition of the numbers in the Polish language
Authors:
Plichta, A.
Gąciarz, T.
Krzywdziński, T.
Links:
https://bibliotekanauki.pl/articles/308844.pdf
Publication date:
2013
Publisher:
Instytut Łączności - Państwowy Instytut Badawczy
Keywords:
Automatic Speech Recognition
compressed sensing
Sparse Classification
Description:
Automatic Speech Recognition is one of the hottest research and application problems in today's ICT technologies. Rapid progress in the development of intelligent mobile systems calls for the implementation of new services in which users can communicate with devices by sending audio commands. Such systems must additionally be integrated with highly distributed infrastructures such as computational and mobile clouds, Wireless Sensor Networks (WSNs), and many others. This paper presents recent research results on the recognition of separate words and words in short contexts (limited to numbers) articulated in the Polish language. Compressed Sensing Theory (CST) is applied for the first time as a methodology for speech recognition. The effectiveness of the proposed methodology is justified in numerical tests for both separate words and short sentences.
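The abstract names compressed sensing and sparse classification without detailing them; one common reading of that pairing is sparse-representation classification, in which a test feature vector is expressed as a sparse combination of training feature vectors and assigned to the class with the smallest reconstruction residual. The sketch below illustrates that generic idea with scikit-learn's Lasso on synthetic features; it is an assumption about the family of methods, not the authors' algorithm.

```python
# Sparse-representation classification sketch on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_classes, per_class, dim = 3, 20, 40
# Dictionary of training feature vectors, one block of columns per word class.
D = np.hstack([rng.normal(loc=c, size=(dim, per_class)) for c in range(n_classes)])
labels = np.repeat(np.arange(n_classes), per_class)

def classify(y, D, labels, alpha=0.05):
    """Find a sparse code for y over D, then pick the class with the lowest residual."""
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D, y)
    x = coder.coef_
    residuals = []
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)          # keep only this class's coefficients
        residuals.append(np.linalg.norm(y - D @ xc))
    return int(np.argmin(residuals))

test = rng.normal(loc=1, size=dim)                   # a sample drawn near class 1
print("predicted class:", classify(test, D, labels))
```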
Source:
Journal of Telecommunications and Information Technology; 2013, 4; 70-78
1509-4553
1899-8852
Appears in:
Journal of Telecommunications and Information Technology
Content provider:
Biblioteka Nauki
Article
Title:
Estimation and tracking of fundamental, 2nd and 3rd harmonic frequencies for spectrogram normalization in speech recognition
Authors:
Fujimoto, K.
Hamada, N.
Kasprzak, W.
Links:
https://bibliotekanauki.pl/articles/201105.pdf
Publication date:
2012
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Keywords:
automatic speech recognition
spectrogram analysis
particle filter
pitch estimation
Description:
A stable and accurate estimation of the fundamental frequency (pitch, F0) is an important requirement in speech and music signal analysis, in tasks like automatic speech recognition and the extraction of a target signal in a noisy environment. In this paper, we propose a pitch-related spectrogram normalization scheme to improve the speaker-independence of standard speech features. A very accurate estimation of the fundamental frequency is a must. Hence, we develop a non-parametric recursive estimation method for F0 and its 2nd and 3rd harmonic frequencies in noisy conditions. The proposed method differs from typical Kalman and particle filter methods in that no particular sum-of-sinusoids model is used. We also estimate F0 and its lower harmonics by using a novel likelihood function. Through experiments under various noise levels, the proposed method is shown to be more accurate than other conventional methods. The spectrogram normalization scheme maps the real harmonic structure to a normalized structure. Results obtained for voiced phonemes show an increase in the stability of the standard speech features: the average within-phoneme distance of the MFCC features for voiced phonemes can be decreased by several percent.
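The paper's recursive estimator is not reproduced here; as a simple point of reference for what an F0 estimator does, the sketch below implements a basic autocorrelation pitch estimator on one frame, a conventional method of the kind the abstract compares against, with an assumed search range of 80-400 Hz.

```python
# Basic autocorrelation F0 estimator for a single frame (conventional baseline, not the paper's method).
import numpy as np

def estimate_f0(frame, fs, fmin=80.0, fmax=400.0):
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # autocorrelation, non-negative lags
    lag_min, lag_max = int(fs / fmax), int(fs / fmin)
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))            # strongest peak in the allowed lag range
    return fs / lag

# Toy usage: a 150 Hz harmonic signal with noise.
fs = 16000
t = np.arange(0, 0.04, 1 / fs)
frame = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 300 * t) + 0.05 * np.random.randn(t.size)
print("estimated F0 ~ %.1f Hz" % estimate_f0(frame, fs))
```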
Source:
Bulletin of the Polish Academy of Sciences. Technical Sciences; 2012, 60, 1; 71-81
0239-7528
Appears in:
Bulletin of the Polish Academy of Sciences. Technical Sciences
Content provider:
Biblioteka Nauki
Article
Title:
Application of automatic speech recognition to medical reports spoken in Polish
Authors:
Hnatkowska, B.
Sas, J.
Links:
https://bibliotekanauki.pl/articles/333379.pdf
Publication date:
2008
Publisher:
Uniwersytet Śląski. Wydział Informatyki i Nauki o Materiałach. Instytut Informatyki. Zakład Systemów Komputerowych
Keywords:
medical information systems
language models
automatic speech recognition
hospital information systems
Description:
The paper presents an attempt at automatic speech recognition of Polish spoken medical texts. The attempt resulted in an experimental system that can be used as a tool in practical applications. The system uses a typical recognition method based on Hidden Markov Models and a domain-specific language model. The implemented software made it possible to conduct many experiments aimed at evaluating the usefulness of the assumed approach. The obtained experimental results are presented and analyzed. The system architecture and the way in which it can be integrated with hospital information systems are also described.
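The abstract only names the HMM-based approach; for readers unfamiliar with it, the sketch below shows Viterbi decoding of the most likely state sequence in a toy discrete HMM, the core decoding step such recognizers rely on, with made-up states and probabilities rather than anything from the paper.

```python
# Viterbi decoding for a toy discrete HMM (illustrative; states and probabilities are made up).
import numpy as np

states = ["sil", "phone_a", "phone_b"]
start = np.log(np.array([0.8, 0.1, 0.1]))
trans = np.log(np.array([[0.6, 0.2, 0.2],
                         [0.1, 0.7, 0.2],
                         [0.1, 0.2, 0.7]]))
emit = np.log(np.array([[0.7, 0.2, 0.1],     # P(observation symbol | state)
                        [0.1, 0.8, 0.1],
                        [0.1, 0.1, 0.8]]))

def viterbi(obs):
    delta = start + emit[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = delta[:, None] + trans           # score of reaching each state from each predecessor
        back.append(scores.argmax(axis=0))        # best predecessor per state
        delta = scores.max(axis=0) + emit[:, o]
    path = [int(delta.argmax())]
    for bp in reversed(back):                     # backtrack through the stored predecessors
        path.append(int(bp[path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 1, 1, 2, 2, 0]))   # most likely state sequence for an observation sequence
```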
Source:
Journal of Medical Informatics & Technologies; 2008, 12; 223-229
1642-6037
Appears in:
Journal of Medical Informatics & Technologies
Content provider:
Biblioteka Nauki
Article
Title:
Preliminary Evaluation of Convolutional Neural Network Acoustic Model for Iban Language Using NVIDIA NeMo
Authors:
Michael, Steve Olsen
Juan, Sarah Samson
Mit, Edwin
Links:
https://bibliotekanauki.pl/articles/2058507.pdf
Publication date:
2022
Publisher:
Instytut Łączności - Państwowy Instytut Badawczy
Keywords:
acoustic modeling
automatic speech recognition
convolutional neural network
CNN
under-resourced language
NVIDIA NeMo
Description:
For the past few years, artificial neural networks (ANNs) have been among the most common solutions relied upon when developing automatic speech recognition (ASR) acoustic models. There are several variants of ANNs, such as deep neural networks (DNNs), recurrent neural networks (RNNs), and convolutional neural networks (CNNs). The CNN model is widely used as a method for improving image processing performance. In recent years, CNNs have also been utilized in ASR, and this paper investigates the preliminary results of an end-to-end CNN-based ASR system built with NVIDIA NeMo on the Iban corpus, an under-resourced language. Studies have shown that CNN acoustic models can produce excellent word error rates (WER) on speech data. Conversely, results and studies concerning under-resourced languages remain unsatisfactory. Hence, by using NVIDIA NeMo, a new ASR engine developed by NVIDIA, the viability and potential of this alternative approach are evaluated in this paper. Two experiments were conducted: the number of resources used in training our ASR system was manipulated, as was an internal parameter of the engine, namely the number of epochs. The results of these experiments are then analyzed and compared with the results reported in existing papers.
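WER is the headline metric throughout these papers; as a self-contained reference, the sketch below computes word error rate as the word-level Levenshtein distance (substitutions + deletions + insertions) divided by the number of reference words. It is the standard definition, not tied to NeMo or any other toolkit.

```python
# Word error rate via word-level edit distance (standard definition, not tied to any toolkit).
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)  # substitution, deletion, insertion
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the cat sat on the mat", "the cat sat mat"))  # 2 deletions over 6 words -> ~0.33
```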
Source:
Journal of Telecommunications and Information Technology; 2022, 1; 43-53
1509-4553
1899-8852
Appears in:
Journal of Telecommunications and Information Technology
Content provider:
Biblioteka Nauki
Article
Title:
An Effective Speaker Clustering Method using UBM and Ultra-Short Training Utterances
Authors:
Hossa, R.
Makowski, R.
Links:
https://bibliotekanauki.pl/articles/176593.pdf
Publication date:
2016
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Keywords:
automatic speech recognition
interindividual difference compensation
speaker clustering
universal background model
GMM weighting factor adaptation
Description:
The same speech sounds (phones) produced by different speakers can sometimes exhibit significant differences. Therefore, it is essential to use algorithms that compensate for these differences in ASR systems. Speaker clustering is an attractive solution to the compensation problem, as it does not require long utterances or high computational effort at the recognition stage. This report proposes a clustering method based solely on the adaptation of the weights of a universal background model (UBM). The solution has turned out to be effective even when using a very short utterance. The obtained improvement in frame recognition quality, measured by means of the frame error rate, is over 5%. It is noteworthy that this improvement concerns all vowels, even though the clustering discussed in this report was based only on the phoneme /a/. This indicates a strong correlation between the articulation of different vowels, which is probably related to the size of the vocal tract.
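The weight-only adaptation the abstract refers to can be sketched generically: fit a GMM as the UBM, accumulate posterior (responsibility) counts on a short utterance, and interpolate them with the UBM weights using a relevance factor. The code below shows this with scikit-learn on synthetic features; the relevance factor and model sizes are arbitrary, and this is only an assumed form of the adaptation, not the authors' exact procedure.

```python
# MAP-style adaptation of GMM (UBM) weights from a short utterance (illustrative sketch).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
background = rng.normal(size=(5000, 13))          # stand-in for pooled training features (e.g. MFCCs)
ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(background)

def adapt_weights(ubm, utterance, relevance=16.0):
    """Interpolate posterior counts from the utterance with the UBM weights."""
    resp = ubm.predict_proba(utterance)            # (frames, components) responsibilities
    counts = resp.sum(axis=0)
    alpha = counts / (counts + relevance)          # per-component adaptation coefficient
    w = alpha * (counts / counts.sum()) + (1.0 - alpha) * ubm.weights_
    return w / w.sum()                             # renormalize to a valid weight vector

short_utt = rng.normal(loc=0.3, size=(50, 13))     # ultra-short utterance (50 frames)
print(adapt_weights(ubm, short_utt))
```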
Source:
Archives of Acoustics; 2016, 41, 1; 107-118
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
Building compact language models for medical speech recognition in mobile devices with limited amount of memory
Authors:
Sas, J.
Links:
https://bibliotekanauki.pl/articles/332971.pdf
Publication date:
2012
Publisher:
Uniwersytet Śląski. Wydział Informatyki i Nauki o Materiałach. Instytut Informatyki. Zakład Systemów Komputerowych
Keywords:
automatic speech recognition
medical information systems
language modeling
Description:
The article presents a method of building a compact language model for speech recognition on devices with a limited amount of memory. The commonly used bigram word-based language models allow for highly accurate speech recognition but require a large amount of memory to store, mainly due to the large number of word bigrams. The method proposed here ranks bigrams according to their importance for speech recognition and replaces the explicit estimation of less important bigram probabilities with probabilities derived from a class-based model. The class-based model is created by assigning words appearing in the corpus to classes corresponding to their syntactic properties. The classes represent various combinations of part-of-speech inflectional features such as number, case, tense, and person. In order to further reduce the memory needed to store the class-based model, the number of part-of-speech classes is reduced by merging classes that appear in stochastically similar contexts in the corpus. Experiments carried out on selected domains of medical speech show that the method allows for a 75% reduction in model size without a significant loss of speech recognition accuracy.
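As a rough illustration of the idea described above (not the paper's ranking criterion), the sketch below keeps only the most frequent word bigrams explicitly and falls back to a class-based estimate P(c2|c1)*P(w2|c2) for the rest; the corpus, the word-to-class assignments, and the cutoff are all toy assumptions.

```python
# Toy compact bigram LM: explicit probabilities for frequent bigrams, class-based backoff otherwise.
from collections import Counter

corpus = "the patient reports the pain the patient denies the pain".split()
word2class = {"the": "DET", "patient": "NOUN", "pain": "NOUN", "reports": "VERB", "denies": "VERB"}

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
class_seq = [word2class[w] for w in corpus]
class_bigrams = Counter(zip(class_seq, class_seq[1:]))
class_unigrams = Counter(class_seq)

KEEP = {bg for bg, n in bigrams.items() if n >= 2}           # "important" bigrams stored explicitly

def prob(w1, w2):
    if (w1, w2) in KEEP:                                     # explicit word-bigram estimate
        return bigrams[(w1, w2)] / unigrams[w1]
    c1, c2 = word2class[w1], word2class[w2]
    p_c2_given_c1 = class_bigrams.get((c1, c2), 0) / class_unigrams[c1]
    p_w2_given_c2 = unigrams[w2] / sum(n for w, n in unigrams.items() if word2class[w] == c2)
    return p_c2_given_c1 * p_w2_given_c2                     # class-based backoff

print(prob("the", "patient"))      # kept explicitly
print(prob("patient", "reports"))  # backs off to the class model
```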
Source:
Journal of Medical Informatics & Technologies; 2012, 20; 111-119
1642-6037
Appears in:
Journal of Medical Informatics & Technologies
Content provider:
Biblioteka Nauki
Article
Title:
Optimal spoken dialog control in hands-free medical information systems
Authors:
Sas, J.
Links:
https://bibliotekanauki.pl/articles/333081.pdf
Publication date:
2009
Publisher:
Uniwersytet Śląski. Wydział Informatyki i Nauki o Materiałach. Instytut Informatyki. Zakład Systemów Komputerowych
Keywords:
automatic speech recognition
genetic optimization
medical information systems
Description:
The paper presents a method for the optimal selection of utterances used as command entry-words for a voice-controlled application. Voice-controlled programs seem particularly useful in the area of medical informatics, where a physician interacts with a program by voice while operating a medical device or while involved in examinations requiring manual activities. The proposed method selects command words from sets of proposals defined for each command so as to minimize the overall probability of incorrect command recognition. First, the entry-word dissimilarity matrix is calculated. The word dissimilarities are evaluated using HMMs consisting of appropriately trained acoustic models of the phonemes constituting the words. The trained HMM is used as a sample-utterance generator for the word. The artificially created utterance samples are then recognized by speech recognizers built for pairs of words, and the estimated probability of correct recognition is used as the word dissimilarity measure. The word dissimilarities are then used to determine the average assessment of word selections that can be used as commands. A selection is created by choosing a single word from the set of candidates defined for each command. Finally, a suboptimal selection is found using a genetic algorithm. The experiments carried out show that suboptimal selection of command entry-words can noticeably increase the accuracy of spoken command recognition in many cases.
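A minimal sketch of the final optimization step, assuming a precomputed pairwise dissimilarity function: a simple genetic algorithm that picks one word per command so that the chosen words are as mutually dissimilar as possible. The candidate lists, the dissimilarity measure (plain string similarity via difflib in place of the HMM-based measure) and the GA settings are toy assumptions, not the paper's setup.

```python
# Toy genetic algorithm selecting one entry-word per command to maximize mutual dissimilarity.
import random
from difflib import SequenceMatcher

random.seed(0)
candidates = {                       # hypothetical commands with candidate entry-words
    "open":  ["open", "show", "display"],
    "close": ["close", "hide", "dismiss"],
    "save":  ["save", "store", "record"],
}
commands = list(candidates)

def dissimilarity(a, b):             # stand-in for the HMM-based measure from the paper
    return 1.0 - SequenceMatcher(None, a, b).ratio()

def fitness(selection):              # sum of pairwise dissimilarities of the chosen words
    words = [candidates[c][i] for c, i in zip(commands, selection)]
    return sum(dissimilarity(a, b) for k, a in enumerate(words) for b in words[k + 1:])

def mutate(sel):
    sel = list(sel)
    pos = random.randrange(len(sel))
    sel[pos] = random.randrange(len(candidates[commands[pos]]))
    return tuple(sel)

population = [tuple(random.randrange(len(candidates[c])) for c in commands) for _ in range(20)]
for _ in range(50):                  # evolve: keep the best half, refill with mutated copies
    population.sort(key=fitness, reverse=True)
    population = population[:10] + [mutate(random.choice(population[:10])) for _ in range(10)]

best = max(population, key=fitness)
print({c: candidates[c][i] for c, i in zip(commands, best)})
```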
Source:
Journal of Medical Informatics & Technologies; 2009, 13; 113-120
1642-6037
Appears in:
Journal of Medical Informatics & Technologies
Content provider:
Biblioteka Nauki
Article
Title:
Pipelined language model construction for Polish speech recognition
Authors:
Sas, J.
Żołnierek, A.
Links:
https://bibliotekanauki.pl/articles/329841.pdf
Publication date:
2013
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Keywords:
automatic speech recognition
hidden Markov model
adaptive language model
Description:
The aim of the work described in this article is to elaborate and experimentally evaluate a consistent method of language model (LM) construction for Polish speech recognition. In the proposed method we tried to take into account the features and specific problems experienced in practical applications of speech recognition in the Polish language: rich inflection, a loose word order, and the tendency for short-word deletion. The LM is created in five stages. Each successive stage takes the model prepared at the previous stage and modifies or extends it so as to improve its properties. At the first stage, typical methods of LM smoothing are used to create the initial model; the four most frequently used methods of LM construction are considered here. At the second stage the model is extended to take into account words indirectly co-occurring in the corpus. At the next stage, LM modifications aim to reduce short-word deletion errors, which occur frequently in Polish speech recognition. The fourth stage extends the model by inserting words that were not observed in the corpus. Finally, the model is modified so as to ensure highly accurate recognition of very important utterances. The performance of the applied methods is tested in four language domains.
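Of the pipeline above, only the first stage (smoothing of an n-gram model) lends itself to a compact illustration. The sketch below estimates an interpolated bigram model with absolute discounting on a toy corpus, one common smoothing choice, offered only as an example of what such an initial model looks like and not as the specific methods evaluated in the paper.

```python
# Interpolated bigram LM with absolute discounting on a toy corpus (one possible smoothing scheme).
from collections import Counter

corpus = "ala ma kota kot ma ale ala lubi kota".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
N = len(corpus)
D = 0.75                                           # discount mass moved to the backoff distribution

def p_unigram(w):
    return unigrams[w] / N

def p_bigram(w1, w2):
    n_types = sum(1 for (a, _b) in bigrams if a == w1)      # distinct continuations of w1
    lam = D * n_types / unigrams[w1]                        # interpolation weight for the backoff
    discounted = max(bigrams.get((w1, w2), 0) - D, 0) / unigrams[w1]
    return discounted + lam * p_unigram(w2)

print(p_bigram("ala", "ma"), p_bigram("ala", "kot"))        # seen vs. unseen bigram
```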
Source:
International Journal of Applied Mathematics and Computer Science; 2013, 23, 3; 649-668
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
Automatic prolongation recognition in disordered speech using CWT and Kohonen network
Authors:
Codello, I.
Kuniszyk-Jóźkowiak, W.
Smołka, E.
Kobus, A.
Links:
https://bibliotekanauki.pl/articles/332965.pdf
Publication date:
2012
Publisher:
Uniwersytet Śląski. Wydział Informatyki i Nauki o Materiałach. Instytut Informatyki. Zakład Systemów Komputerowych
Keywords:
Kohonen network
automatic disordered speech recognition
WaveBlaster
continuous wavelet transform (CWT)
Bark scale
speech prolongations
Description:
Automatic recognition of disorders in speech can be very helpful for the therapist while monitoring the therapy progress of patients with disordered speech. In this article we focus on prolongations. We analyze the signal using the Continuous Wavelet Transform with 18 Bark scales, divide the result into vectors (using windowing), and then pass these vectors to a Kohonen network. A fairly extensive search analysis was performed (5 variables were checked), during which recognition above 90% was achieved. All of the analysis was performed, and the results obtained, using the authors' program "WaveBlaster". It is very important that a recognition ratio above 90% was obtained by a fully automatic (unsupervised) algorithm from continuous speech. The presented problem is part of our research aimed at creating an automatic prolongation recognition system.
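A very rough sketch of the front end described above, under several assumptions: PyWavelets' generic Morlet CWT stands in for the authors' Bark-scaled analysis, the scales and window length are arbitrary, and a minimal hand-rolled competitive (Kohonen-style) map replaces whatever network configuration the paper uses.

```python
# CWT feature windows fed to a tiny Kohonen-style map (assumed stand-in for the paper's setup).
import numpy as np
import pywt

rng = np.random.default_rng(0)
fs = 11025
signal = np.sin(2 * np.pi * 300 * np.arange(fs) / fs) + 0.1 * rng.standard_normal(fs)

# CWT with a generic Morlet wavelet (the paper uses 18 Bark-spaced scales instead).
scales = np.arange(1, 19)
coeffs, _freqs = pywt.cwt(signal, scales, "morl")           # shape: (18 scales, samples)

# Window the scalogram into per-frame log-energy vectors.
win = 512
frames = coeffs[:, : coeffs.shape[1] // win * win].reshape(len(scales), -1, win)
vectors = np.log(np.mean(frames ** 2, axis=2) + 1e-9).T     # (n_frames, 18)

# Minimal competitive map: move the winning unit (and it alone) toward each sample.
n_units, lr = 5, 0.1
weights = rng.standard_normal((n_units, vectors.shape[1]))
for _ in range(20):
    for v in vectors:
        winner = np.argmin(np.linalg.norm(weights - v, axis=1))
        weights[winner] += lr * (v - weights[winner])

print("winning units for the first frames:",
      [int(np.argmin(np.linalg.norm(weights - v, axis=1))) for v in vectors[:10]])
```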
Source:
Journal of Medical Informatics & Technologies; 2012, 20; 137-144
1642-6037
Appears in:
Journal of Medical Informatics & Technologies
Content provider:
Biblioteka Nauki
Article
Title:
Disordered sound repetition recognition in continuous speech using CWT and Kohonen network
Authors:
Codello, I.
Kuniszyk-Jóźkowiak, W.
Smołka, E.
Kobus, A.
Links:
https://bibliotekanauki.pl/articles/333359.pdf
Publication date:
2011
Publisher:
Uniwersytet Śląski. Wydział Informatyki i Nauki o Materiałach. Instytut Informatyki. Zakład Systemów Komputerowych
Keywords:
Kohonen network
automatic disordered speech recognition
WaveBlaster
continuous wavelet transform (CWT)
Bark scale
sound repetition
Description:
Automatic recognition of disorders in speech can be very helpful for the therapist while monitoring the therapy progress of patients with disordered speech. This article focuses on sound repetitions. The signal is analyzed using the Continuous Wavelet Transform with 16 Bark scales; the result is divided into vectors and passed to a Kohonen network. Finally, the Kohonen winning-neuron output is fed to a 3-layer perceptron. The recognition ratio was increased by about 20% by modifying the Kohonen network training process as well as the CWT computation algorithm. All of the analysis was performed, and the results obtained, using the authors' program "WaveBlaster". The problem presented in this article is part of our research work aimed at creating an automatic disordered speech recognition system.
Source:
Journal of Medical Informatics & Technologies; 2011, 17; 123-130
1642-6037
Appears in:
Journal of Medical Informatics & Technologies
Content provider:
Biblioteka Nauki
Article
Title:
Recognition of speaker’s age group and gender for a large database of telephone-recorded voices
Authors:
Staroniewicz, Piotr
Links:
https://bibliotekanauki.pl/articles/2202432.pdf
Publication date:
2022
Publisher:
Politechnika Poznańska. Instytut Mechaniki Stosowanej
Keywords:
speech processing
automatic age recognition
Description:
The paper presents the results of the automatic recognition of age group and gender of speakers performed for the large SpeechDAT(E) acoustic database for the Polish language, containing recordings of 1000 speakers (486 males/514 females) aged 12 to 73, recorded in telephone conditions. Three age groups were recognised for each gender. Mel Frequency Cepstral Coefficients (MFCC) were used to describe the recognized signals parametrically. Among the classification methods tested in this study, the best results were obtained for the SVM (Support Vector Machines) method.
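A minimal sketch of the feature-plus-classifier pairing the abstract names, assuming librosa for MFCC extraction and scikit-learn's SVC, with synthetic audio and made-up age-group labels in place of the SpeechDAT(E) data.

```python
# MFCC features + SVM classifier sketch (synthetic data; not the SpeechDAT(E) experiment).
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
sr = 8000                                   # telephone-band sampling rate

def fake_utterance(pitch_hz):
    """One second of a noisy tone standing in for a recorded voice."""
    t = np.arange(sr) / sr
    return np.sin(2 * np.pi * pitch_hz * t) + 0.3 * rng.standard_normal(sr)

def features(y):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)      # (13, frames)
    return mfcc.mean(axis=1)                                # utterance-level average

# Three made-up "age groups", crudely separated by pitch for illustration only.
X, labels = [], []
for group, pitch in enumerate([120, 180, 240]):
    for _ in range(30):
        X.append(features(fake_utterance(pitch + rng.normal(0, 10))))
        labels.append(group)

X_train, X_test, y_train, y_test = train_test_split(np.array(X), np.array(labels), random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("toy accuracy:", clf.score(X_test, y_test))
```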
Source:
Vibrations in Physical Systems; 2022, 33, 2; art. no. 2022203
0860-6897
Appears in:
Vibrations in Physical Systems
Content provider:
Biblioteka Nauki
Article
Title:
Behavioral features of the speech signal as part of improving the effectiveness of the automatic speaker recognition system
Authors:
Mały, Dominik
Dobrowolski, Andrzej
Links:
https://bibliotekanauki.pl/articles/27323689.pdf
Publication date:
2023
Publisher:
Centrum Rzeczoznawstwa Budowlanego Sp. z o.o.
Keywords:
automatic speaker recognition
automatic speaker recognition systems
physical features
behavioral features
speech signal
Description:
The current reality is saturated with intelligent telecommunications solutions, and automatic speaker recognition systems are an integral part of many of them. They are widely used in sectors such as banking, telecommunications, and forensics. The ease of performing automatic analysis and of efficiently extracting the distinctive characteristics of the human voice makes it possible to identify, verify, and authorize the speaker under investigation. Currently, the vast majority of solutions in the field of speaker recognition are based on distinctive features resulting from the structure of the speaker's vocal tract (laryngeal sound analysis), called the physical features of the voice. Despite the high efficiency of such systems, exceeding 95%, their further development is already very difficult, because the potential of distinctive physical features has been largely exhausted. Further opportunities to increase the effectiveness of automatic speaker recognition systems based on physical features arise from additionally taking into account the behavioral features of the speech signal, which is the subject of this article.
Source:
Inżynieria Bezpieczeństwa Obiektów Antropogenicznych; 2023, 4; 26-34
2450-1859
2450-8721
Appears in:
Inżynieria Bezpieczeństwa Obiektów Antropogenicznych
Content provider:
Biblioteka Nauki
Article
