You are searching for the phrase "feature extraction" by criterion: Subject


Showing 1-7 of 7
Title:
Phase Autocorrelation Bark Wavelet Transform (PACWT) Features for Robust Speech Recognition
Authors:
Majeed, S. A.
Husain, H.
Samad, S. A.
Links:
https://bibliotekanauki.pl/articles/177326.pdf
Publication date:
2015
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
speech recognition
feature extraction
phase autocorrelation
wavelet transform
Description:
In this paper, a new feature-extraction method is proposed to achieve robustness in speech recognition systems. The method combines the benefits of phase autocorrelation (PAC) with the bark wavelet transform. PAC uses the angle to measure correlation instead of the traditional autocorrelation measure, whereas the bark wavelet transform is a special type of wavelet transform designed particularly for speech signals. The features extracted by this combined method are called phase autocorrelation bark wavelet transform (PACWT) features. The speech recognition performance of the PACWT features is evaluated and compared to the conventional mel frequency cepstral coefficient (MFCC) features using the TI-Digits database under different types and levels of noise. The database has been divided into male and female data. The results show that the word recognition rate using the PACWT features for noisy male data (white noise at 0 dB SNR) is 60%, whereas it is 41.35% for the MFCC features under identical conditions.
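For illustration, a minimal sketch of the angle-based correlation idea described in this abstract. This is not the authors' code: the Bark wavelet stage is omitted, and the frame length, lag range, and test signal are assumptions.

```python
import numpy as np

def phase_autocorrelation(frame, max_lag):
    """Angle-based autocorrelation of one speech frame (illustrative sketch).

    Instead of the raw inner product <x, x_k>, the correlation at lag k is
    measured as the angle between the frame and its circularly shifted copy,
    which is the core idea of phase autocorrelation (PAC).
    """
    x = np.asarray(frame, dtype=float)
    energy = np.dot(x, x) + 1e-12           # ||x||^2, guarded against silence
    pac = np.empty(max_lag)
    for k in range(max_lag):
        shifted = np.roll(x, k)              # circular shift keeps ||x_k|| = ||x||
        cos_theta = np.clip(np.dot(x, shifted) / energy, -1.0, 1.0)
        pac[k] = np.arccos(cos_theta)        # angle replaces raw correlation
    return pac

# Example on a hypothetical noisy sine frame.
t = np.arange(400) / 8000.0
frame = np.sin(2 * np.pi * 200 * t) + 0.1 * np.random.randn(t.size)
print(phase_autocorrelation(frame, max_lag=12))
```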
Source:
Archives of Acoustics; 2015, 40, 1; 25-31
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
A Classification Method Related to Respiratory Disorder Events Based on Acoustical Analysis of Snoring
Authors:
Wang, Can
Peng, Jianxin
Zhang, Xiaowen
Links:
https://bibliotekanauki.pl/articles/176601.pdf
Publication date:
2020
Publisher:
Polska Akademia Nauk. Czasopisma i Monografie PAN
Subjects:
acoustical analysis
feature extraction
support vector machine
snoring sound
Description:
Acoustical analysis of snoring provides a new approach to the diagnosis of obstructive sleep apnea hypopnea syndrome (OSAHS). A classification method based on respiratory disorder events is presented to predict the apnea-hypopnea index (AHI) of OSAHS patients. The acoustical features of snoring were extracted from full-night recordings of 6 OSAHS patients, and regular snoring sounds and snoring sounds related to respiratory disorder events were classified using a support vector machine (SVM). The mean recognition rate for regular snoring sounds and snoring sounds related to respiratory disorder events exceeds 91.14% when the SVM parameters are tuned with grid search, a genetic algorithm, and particle swarm optimization. The AHI predicted in the present study correlates highly with the AHI from polysomnography, with a correlation coefficient of 0.976. These results demonstrate that the proposed method can classify the snoring sounds of OSAHS patients and can be used to provide guidance for the diagnosis of OSAHS.
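A minimal sketch of the classification stage only: feature vectors (random placeholders standing in for the snoring features extracted in the article) are separated into the two event classes with an SVM tuned by grid search. The genetic-algorithm and particle-swarm tuning used in the paper are not reproduced, and all data shapes are assumptions.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 13))             # placeholder acoustic feature vectors
y = rng.integers(0, 2, size=300)           # 0 = regular snore, 1 = disorder event

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM with hyper-parameters chosen by cross-validated grid search.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]},
                    cv=5)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print("test accuracy:", grid.score(X_te, y_te))
```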
Source:
Archives of Acoustics; 2020, 45, 1; 141-151
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
A Fast Method of Feature Extraction for Lowering Vehicle Pass-By Noise Based on Nonnegative Tucker3 Decomposition
Authors:
Wang, H.
Cheng, G.
Deng, G.
Li, X.
Li, H.
Huang, Y.
Links:
https://bibliotekanauki.pl/articles/177883.pdf
Publication date:
2017
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
vehicle pass-by noise
NTD
feature extraction
sound pressure level
Description:
Identifying a particular type of fault from vehicle pass-by noise is often difficult for engineers, especially when significant features are masked by other interfering noise, for example when squealing noise occurs almost simultaneously with whistling in the exhaust system. To cope with this problem, a new method that remains robust to noise even when the features are entangled is developed to extract clear features for fault analysis. In the proposed method, the nonnegative Tucker3 decomposition (NTD) with a fast updating algorithm, denoted NTD_FUP, identifies the natural frequencies of parts and components of the exhaust system. NTD_FUP not only extracts clear features from the mixed noise but also outperforms traditional methods in practice. An aluminium-foil alloy, used for the heat shield because of its lower noise radiation, then replaces the plain aluminium alloy. Extensive experiments show that the improved heat shield reduces the sound pressure level of the vehicle pass-by noise by 0.9 dB(A) and also provides a lighter design for the automobile exhaust system.
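As an illustration of the decomposition step, a minimal nonnegative Tucker3 sketch on a (frequency x time x channel) tensor, assuming the third-party tensorly package as a stand-in for the article's NTD_FUP algorithm. The tensor here is random placeholder data, not measured pass-by noise, and the ranks are assumptions.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_tucker

rng = np.random.default_rng(0)
spectrogram_tensor = tl.tensor(rng.random((64, 128, 4)))   # freq x time x mic

core, factors = non_negative_tucker(spectrogram_tensor, rank=[8, 8, 2],
                                    n_iter_max=200)

# factors[0]: spectral basis vectors, whose peaks indicate dominant frequency
# components (the article associates these with natural frequencies of exhaust
# parts); factors[1]: temporal activations; factors[2]: channel loadings.
for mode, f in enumerate(factors):
    print(f"mode {mode} factor shape: {f.shape}")
```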
Source:
Archives of Acoustics; 2017, 42, 4; 619-629
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
Enhancement in Bearing Fault Classification Parameters Using Gaussian Mixture Models and Mel Frequency Cepstral Coefficients Features
Authors:
Atmani, Youcef
Rechak, Said
Mesloub, Ammar
Hemmouche, Larbi
Links:
https://bibliotekanauki.pl/articles/177335.pdf
Publication date:
2020
Publisher:
Polska Akademia Nauk. Czasopisma i Monografie PAN
Subjects:
bearing faults
Gaussian mixture models
Mel frequency cepstral coefficients
feature extraction
diagnosis
Description:
In recent decades, the assessment of rolling bearing faults and their evolution over time has received much interest due to its crucial role in the Condition-Based Maintenance (CBM) of rotating machinery. This paper investigates bearing fault diagnosis based on a classification approach using Gaussian Mixture Models (GMM) and Mel Frequency Cepstral Coefficient (MFCC) features. Throughout the whole classification process, a single evaluation criterion is used: the Average Classification Rate (ACR) obtained from the confusion matrix. In every test, the generated feature vectors are used to discriminate between four conditions: normal bearings, bearings with inner race faults, bearings with outer race faults, and bearings with ball faults. Many configurations were tested to determine the optimal values of the input parameters, such as the analysis frame length and the model order. The experimental application of the proposed method was based on vibration signals taken from the bearing data center website of Case Western Reserve University (CWRU). Results show that the proposed method can reliably classify the different fault conditions and achieves its highest classification performance under certain configurations.
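A minimal sketch of the GMM classification idea: one Gaussian mixture is fitted per fault class, and a test vector is assigned to the class whose model gives the highest log-likelihood. The feature vectors below are random placeholders for the MFCC features the article computes from CWRU vibration signals, and the model order is an assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
classes = ["normal", "inner_race", "outer_race", "ball"]
# Placeholder 13-dimensional "MFCC" vectors, one cluster per fault condition.
train = {c: rng.normal(loc=i, size=(200, 13)) for i, c in enumerate(classes)}

# One GMM per fault condition (n_components plays the role of the model order).
models = {c: GaussianMixture(n_components=4, covariance_type="diag",
                             random_state=0).fit(X) for c, X in train.items()}

def classify(feature_vector):
    scores = {c: m.score_samples(feature_vector[None, :])[0]
              for c, m in models.items()}
    return max(scores, key=scores.get)      # maximum-likelihood class

print(classify(rng.normal(loc=2, size=13)))  # expected: "outer_race"
```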
Source:
Archives of Acoustics; 2020, 45, 2; 283-295
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
Hybridisation of Mel Frequency Cepstral Coefficient and Higher Order Spectral Features for Musical Instruments Classification
Authors:
Bhalke, D. G.
Rama Rao, C. B.
Bormane, D.
Links:
https://bibliotekanauki.pl/articles/176497.pdf
Publication date:
2016
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
feature extraction
MFCC
HOS
bispectrum
bicoherence
non-linearity
non-Gaussianity
CPNN
zero crossing rate (ZCR)
Description:
This paper presents the classification of musical instruments using Mel Frequency Cepstral Coefficients (MFCC) and Higher Order Spectral features. MFCC, cepstral, temporal, spectral, and timbral features have been widely used in the task of musical instrument classification. Since musical sound signals are generated by non-linear dynamics, the non-linearity and non-Gaussianity of musical instruments are important features that have not been considered in the past. In this paper, a hybridisation of MFCC and Higher Order Spectral (HOS) based features is used for musical instrument classification. HOS-based features provide instrument-specific information such as the non-Gaussianity and non-linearity of the instruments. The extracted features are presented to a Counter Propagation Neural Network (CPNN) to identify the instruments and their families. For experimentation, isolated sounds of 19 musical instruments from the McGill University Master Samples (MUMS) sound database have been used. The proposed features yield a significant improvement in the classification accuracy of the system.
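An illustrative sketch of one higher-order-spectral quantity, the bispectrum, estimated by averaging FFT triple products over frames; a non-zero bispectrum reflects the non-Gaussianity and non-linearity mentioned above. This is not the article's feature set: the exact HOS features, the MFCC front end, the CPNN classifier, and the windowing parameters are assumptions.

```python
import numpy as np

def bispectrum(signal, frame_len=256, hop=128):
    """Direct bispectrum estimate B(f1, f2) = E[X(f1) X(f2) conj(X(f1+f2))]."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    acc = np.zeros((frame_len, frame_len), dtype=complex)
    f = np.arange(frame_len)
    for frame in frames:
        X = np.fft.fft(frame * np.hanning(frame_len))
        # Indices of f1 + f2 are taken modulo the FFT length.
        acc += np.outer(X, X) * np.conj(X[(f[:, None] + f[None, :]) % frame_len])
    return acc / len(frames)

# A signal with a quadratic non-linearity has a clearly non-zero bispectrum.
sig = np.sin(2 * np.pi * 0.05 * np.arange(4096)) ** 2
print(np.abs(bispectrum(sig)).max())
```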
Source:
Archives of Acoustics; 2016, 41, 3; 427-436
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
Marine Mammals Classification using Acoustic Binary Patterns
Authors:
Nadir, Maheen
Adnan, Syed M.
Aziz, Sumair
Khan, Muhammad Umar
Links:
https://bibliotekanauki.pl/articles/1953520.pdf
Publication date:
2020
Publisher:
Polska Akademia Nauk. Czasopisma i Monografie PAN
Subjects:
marine mammals
1D Local Binary Patterns
Mel frequency cepstral coefficients
feature extraction
passive acoustic monitoring
Description:
Marine mammal identification and classification for passive acoustic monitoring remain challenging tasks. The interspecific and intraspecific variations in calls within species and among different individuals of a single species make them especially difficult. The variety of species, along with geographical diversity, further complicates accurate classification of marine mammals from acoustic signatures. Prior classification methods focused on spectral features, which increases the bias of contour-based classifiers in automatic detection algorithms. In this study, acoustic marine mammal classification is performed through the fusion of 1D Local Binary Pattern (1D-LBP) and Mel Frequency Cepstral Coefficient (MFCC) based features. A multi-class Support Vector Machine (SVM) classifier is employed to identify the different classes of mammal sounds. Six species are targeted in this research: Tursiops truncatus, Delphinus delphis, Peponocephala electra, Grampus griseus, Stenella longirostris, and Stenella attenuata. The proposed model achieved 90.4% accuracy with a 70-30% training-testing split and 89.6% in 5-fold cross-validation experiments.
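A minimal sketch of a 1D Local Binary Pattern feature: each sample is compared with its neighbours on either side, the sign pattern is packed into an integer code, and a histogram of codes serves as a texture-like descriptor of the call. The fusion with MFCC features and the multi-class SVM from the article are omitted, and the neighbourhood size is an assumption.

```python
import numpy as np

def lbp_1d(signal, p=4):
    """Return 1D-LBP codes using p neighbours on each side of every sample."""
    x = np.asarray(signal, dtype=float)
    codes = np.zeros(len(x) - 2 * p, dtype=int)
    for i in range(p, len(x) - p):
        neighbours = np.concatenate((x[i - p:i], x[i + 1:i + p + 1]))
        bits = (neighbours >= x[i]).astype(int)          # threshold at the centre
        codes[i - p] = int("".join(map(str, bits)), 2)   # pack bits into a code
    return codes

def lbp_histogram(signal, p=4):
    codes = lbp_1d(signal, p)
    hist, _ = np.histogram(codes, bins=2 ** (2 * p), range=(0, 2 ** (2 * p)))
    return hist / hist.sum()                             # normalised feature vector

# Example on a placeholder signal: 2*p = 8 bits give a 256-bin histogram.
print(lbp_histogram(np.random.default_rng(0).normal(size=2000)).shape)
```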
Source:
Archives of Acoustics; 2020, 45, 4; 721-731
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
Automatic Genre Classification Using Fractional Fourier Transform Based Mel Frequency Cepstral Coefficient and Timbral Features
Authors:
Bhalke, D. G.
Rajesh, B.
Bormane, D. S.
Links:
https://bibliotekanauki.pl/articles/177599.pdf
Publication date:
2017
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
feature extraction
Timbral features
MFCC
Mel Frequency Cepstral Coefficient
FrFT
fractional Fourier transform
Fractional MFCC
Tamil Carnatic music
Description:
This paper presents the automatic genre classification of Indian Tamil music and western music using Timbral and Fractional Fourier Transform (FrFT) based Mel Frequency Cepstral Coefficient (MFCC) features. The classifier models for the proposed system are built using K-Nearest Neighbours (K-NN) and Support Vector Machine (SVM). In this work, the performance of various features extracted from music excerpts is analysed to identify appropriate feature descriptors for the two major genres of Indian Tamil music, namely Classical music (Carnatic-based devotional hymn compositions) and Folk music, and for the western genres of Rock and Classical music from the GTZAN dataset. The results for Tamil music show that the combination of Spectral Rolloff, Spectral Flux, Spectral Skewness, and Spectral Kurtosis with Fractional MFCC features outperforms all other feature combinations, yielding a classification accuracy of 96.05%, compared to 84.21% with conventional MFCC. The FrFT-based MFCC also efficiently classifies the two western genres of Rock and Classical music from the GTZAN dataset, with a classification accuracy of 96.25% compared to 80% with MFCC.
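A simplified sketch of the cepstral-coefficient pipeline with a pluggable spectral transform. Standard MFCC uses the FFT; the article replaces that stage with a fractional Fourier transform of some order. NumPy/SciPy ship no FrFT, so the transform below defaults to np.fft.fft and any FrFT implementation (a hypothetical frft(frame, n) helper) would have to be supplied by the user; the mel filterbank is simplified and all parameter values are assumptions.

```python
import numpy as np
from scipy.fftpack import dct

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular mel-spaced filterbank over the positive-frequency bins."""
    mel = np.linspace(0, 2595 * np.log10(1 + (sr / 2) / 700), n_filters + 2)
    hz = 700 * (10 ** (mel / 2595) - 1)
    bins = np.floor((n_fft + 1) * hz / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for m in range(1, n_filters + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fb[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return fb

def cepstral_coeffs(frame, sr=22050, n_fft=512, n_filters=26, n_ceps=13,
                    transform=np.fft.fft):
    """MFCC-style coefficients; pass a fractional transform to get 'Fractional MFCC'."""
    spectrum = np.abs(transform(frame, n_fft))[: n_fft // 2 + 1]
    energies = mel_filterbank(n_filters, n_fft, sr) @ (spectrum ** 2)
    return dct(np.log(energies + 1e-10), norm="ortho")[:n_ceps]

frame = np.random.default_rng(0).normal(size=512)
print(cepstral_coeffs(frame))   # swap `transform` for an FrFT to change the spectral stage
```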
Source:
Archives of Acoustics; 2017, 42, 2; 213-222
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article