
You are searching for the phrase "emotion recognition" by criterion: Subject


Title:
The relationship between Trait Emotional Intelligence and emotion recognition in the context of COVID-19 pandemic
Authors:
Cannavò, Marco
Barberis, Nadia
Larcan, Rosalba
Cuzzocrea, Francesca
Links:
https://bibliotekanauki.pl/articles/2121465.pdf
Publication date:
2022
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
COVID-19
Trait EI
emotion recognition
Description:
The COVID-19 pandemic has had a severe worldwide impact. One line of research warns that facial occlusion may impair facial emotion recognition, while prior research has highlighted the role of Trait Emotional Intelligence in the recognition of non-verbal social stimuli. The sample consisted of 102 emerging adults aged 18-24 (M = 20.76; SD = 2.10; 84% female, 16% male), who were asked to recognize four emotions (happiness, fear, anger, and sadness) in fully visible faces and in faces wearing a mask, and to complete a questionnaire assessing Trait Emotional Intelligence. Results show that participants were less accurate in detecting happiness and fear in covered faces, gave more correct answers overall when the photographed faces were unmasked than when they were masked, and gave more wrong answers when the faces were masked. For happiness and sadness in particular, participants were more accurate when the faces were not wearing a mask. Implications are discussed.
Source:
Polish Psychological Bulletin; 2022, 53, 1; 15-22
0079-2993
Appears in:
Polish Psychological Bulletin
Content provider:
Biblioteka Nauki
Article
Title:
Analysis of Features and Classifiers in Emotion Recognition Systems : Case Study of Slavic Languages
Authors:
Nedeljković, Željko
Milošević, Milana
Ðurović, Željko
Links:
https://bibliotekanauki.pl/articles/176678.pdf
Publication date:
2020
Publisher:
Polska Akademia Nauk. Czasopisma i Monografie PAN
Subjects:
emotion recognition
speech processing
classification algorithms
Description:
Today’s human-computer interaction systems have a broad variety of applications in which automatic recognition of human emotion is of great interest. The literature contains many different, more or less successful, forms of such systems. This work emerged as an attempt to clarify which speech features are the most informative, which classification structure is the most convenient for this type of task, and the degree to which the results are influenced by database size and quality and by the cultural characteristics of a language. The research is presented as a case study on Slavic languages.
Source:
Archives of Acoustics; 2020, 45, 1; 129-140
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
The Acoustic Cues of Fear : Investigation of Acoustic Parameters of Speech Containing Fear
Authors:
Özseven, T.
Links:
https://bibliotekanauki.pl/articles/178133.pdf
Publication date:
2018
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
emotion recognition
acoustic analysis
fear
speech processing
Description:
Speech emotion recognition is an important part of human-machine interaction studies. Acoustic analysis is used for emotion recognition through speech. An emotion does not cause changes in all acoustic parameters; rather, the acoustic parameters affected by emotion vary depending on the emotion type. In this context, the emotion-dependent variability of acoustic parameters is still a current field of study. The purpose of this study is to investigate which acoustic parameters fear affects and the extent of its influence. For this purpose, various acoustic parameters were obtained from speech recordings containing fear and neutral emotions. The variation of these parameters with emotional state was analyzed using statistical methods, and the parameters affected by fear, along with the degree to which they are affected, were determined. According to the results, the majority of the acoustic parameters affected by fear vary with the data used. However, it has been demonstrated that formant frequencies, mel-frequency cepstral coefficients, and jitter parameters can characterize the fear emotion independently of the data used.
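The statistical comparison this abstract describes can be sketched minimally: given values of one acoustic parameter measured for fear and neutral recordings, a two-sample t-test indicates whether fear shifts the parameter significantly. The jitter values and the 0.05 threshold below are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical jitter measurements (%) for fear vs. neutral speech;
# the values are made up for illustration.
fear_jitter = np.array([2.1, 2.3, 2.2, 2.4, 2.0, 2.5])
neutral_jitter = np.array([1.0, 1.1, 0.9, 1.2, 1.0, 1.1])

# Two-sample t-test: does fear significantly shift the parameter?
stat, p_value = ttest_ind(fear_jitter, neutral_jitter)
print(f"t = {stat:.2f}, p = {p_value:.4f}")
affected = p_value < 0.05  # parameter counted as "affected by fear"
print("jitter affected by fear:", affected)
```

The same test, repeated per parameter and per database, would yield the kind of "affected parameter" lists the paper reports.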
Source:
Archives of Acoustics; 2018, 43, 2; 245-251
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
Speech emotion recognition under white noise
Authors:
Huang, C.
Chen, G.
Yu, H.
Bao, Y.
Zhao, L.
Links:
https://bibliotekanauki.pl/articles/177301.pdf
Publication date:
2013
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
speech emotion recognition
speech enhancement
emotion model
Gaussian mixture model
Description:
Speakers' emotional states are recognized from speech signals with additive white Gaussian noise (AWGN). The influence of white noise on a typical emotion recognition system is studied. The emotion classifier is implemented with a Gaussian mixture model (GMM). A Chinese speech emotion database covering nine emotion classes (happiness, sadness, anger, surprise, fear, anxiety, hesitation, confidence, and the neutral state) is used for training and testing. Two speech enhancement algorithms are introduced for improved emotion classification. In the experiments, the Gaussian mixture model is trained on clean speech data and tested under AWGN at various signal-to-noise ratios (SNRs). Both an emotion class model and a dimension space model are adopted for the evaluation of the emotion recognition system. In the emotion class model, the nine emotion classes are classified directly; in the dimension space model, the arousal and valence dimensions are classified into positive or negative regions. The experimental results show that the speech enhancement algorithms consistently improve the performance of the emotion recognition system at various SNRs, and that positive emotions are more likely to be misclassified as negative emotions under white noise.
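The two building blocks named in this abstract can be sketched with toy data: adding AWGN at a prescribed SNR, and maximum-likelihood classification with one Gaussian per emotion class (a one-component stand-in for the paper's GMMs; features, class names, and parameters are all assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

def add_awgn(signal, snr_db):
    """Add white Gaussian noise scaled to a given SNR in dB."""
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)

def fit_gaussian(X):
    """Diagonal Gaussian per emotion: a one-component 'GMM'."""
    return X.mean(axis=0), X.var(axis=0) + 1e-6

def log_likelihood(x, mean, var):
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

# Toy 2-D feature clusters standing in for per-emotion feature vectors.
happy = rng.normal([0.0, 0.0], 0.3, size=(100, 2))
sad = rng.normal([3.0, 3.0], 0.3, size=(100, 2))
models = {"happy": fit_gaussian(happy), "sad": fit_gaussian(sad)}

def classify(x):
    """Pick the emotion whose model gives the highest log-likelihood."""
    return max(models, key=lambda c: log_likelihood(x, *models[c]))

print(classify(np.array([0.1, -0.2])))
print(classify(np.array([2.9, 3.2])))
```

In the paper's setting the models are trained on features from clean speech and evaluated on features from `add_awgn`-style degraded speech at several SNRs.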
Source:
Archives of Acoustics; 2013, 38, 4; 457-463
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
Communication atmosphere in humans and robots interaction based on the concept of fuzzy atmosfield generated by emotional states of humans and robots
Authors:
Liu, Z. T.
Chen, L. F.
Dong, F. Y.
Hirota, K.
Min, W.
Li, D. Y.
Yamazaki, Y.
Links:
https://bibliotekanauki.pl/articles/384920.pdf
Publication date:
2013
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Automatyki i Pomiarów
Subjects:
human-robot interaction
communication atmosphere
fuzzy logic
emotion recognition
Description:
The communication atmosphere arising from the emotional states of humans and robots is modeled using a Fuzzy Atmosfield (FA), where human emotion is estimated from bimodal communication cues (speech and gesture) using weighted fusion and fuzzy logic, and robot emotion is generated by emotional expression synthesis. This makes it possible to quantitatively express the overall affective expression of individuals and helps facilitate smooth communication in human-robot interaction. Experiments in a household environment were performed with four humans and five eye robots, where human emotion recognition based on bimodal cues achieved 84% accuracy on average, an improvement of about 10% compared to using speech alone. Experimental results from the FA-based model of communication atmosphere were evaluated against questionnaire surveys, in which a maximum error of 0.25 and a minimum correlation coefficient of 0.72 for the three FA axes confirm the validity of the proposal. In ongoing work, an atmosphere representation system is being planned for casual communication between humans and robots, taking into account multiple emotional modalities such as speech, gesture, and music.
Source:
Journal of Automation Mobile Robotics and Intelligent Systems; 2013, 7, 2; 52-63
1897-8649
2080-2145
Appears in:
Journal of Automation Mobile Robotics and Intelligent Systems
Content provider:
Biblioteka Nauki
Article
Title:
Multi-objective heuristic feature selection for speech-based multilingual emotion recognition
Authors:
Brester, C.
Semenkin, E.
Sidorov, M.
Links:
https://bibliotekanauki.pl/articles/91588.pdf
Publication date:
2016
Publisher:
Społeczna Akademia Nauk w Łodzi. Polskie Towarzystwo Sieci Neuronowych
Subjects:
multi-objective optimization
feature selection
speech-based emotion recognition
Description:
If conventional feature selection methods do not show sufficient effectiveness, alternative algorithmic schemes might be used. In this paper we propose an evolutionary feature selection technique based on a two-criterion optimization model. To diminish the drawbacks of the genetic algorithms applied as optimizers, we design a parallel multicriteria heuristic procedure based on an island model. The performance of the proposed approach was investigated on the speech-based emotion recognition problem, which reflects one of the most essential problems in human-machine communication. A number of multilingual corpora (German, English and Japanese) were involved in the experiments. According to the results obtained, a high level of emotion recognition was achieved (up to a 12.97% relative improvement compared with the best F-score value on the full set of attributes).
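The core of any two-criterion feature selection scheme is Pareto dominance over candidate feature subsets. The sketch below filters a non-dominated front; the criterion pair (classification error to minimize, number of selected features to minimize) and all values are assumptions for illustration, since the abstract does not state the exact criteria.

```python
def dominates(a, b):
    """a dominates b if it is no worse on every criterion and strictly
    better on at least one. Criteria here are both minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of candidate evaluations."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Each tuple: (classification error, number of selected features) - illustrative.
candidates = [(0.20, 40), (0.25, 10), (0.20, 25), (0.30, 5), (0.22, 25)]
print(pareto_front(candidates))
```

In an island-model genetic algorithm, each island evolves its own population of feature masks and periodically exchanges non-dominated individuals such as these.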
Source:
Journal of Artificial Intelligence and Soft Computing Research; 2016, 6, 4; 243-253
2083-2567
2449-6499
Appears in:
Journal of Artificial Intelligence and Soft Computing Research
Content provider:
Biblioteka Nauki
Article
Title:
Emotion monitoring – verification of physiological characteristics measurement procedures
Authors:
Landowska, A.
Links:
https://bibliotekanauki.pl/articles/220577.pdf
Publication date:
2014
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
affective computing
emotion recognition
physiology
motion artifacts
sensor location
Description:
This paper concerns measurement procedures on an emotion monitoring stand designed for tracking human emotions during human-computer interaction using physiological characteristics. It addresses the key problem of physiological measurements being disturbed by motion typical for human-computer interaction, such as keyboard typing or mouse movements. An original experiment is described that aimed at a practical evaluation of the measurement procedures performed at the emotion monitoring stand constructed at GUT. Different sensor locations were considered and evaluated for suitability and measurement precision in monitoring human-computer interaction. Alternative locations (ear lobes and forearms) for skin conductance, blood volume pulse, and temperature sensors were proposed and verified. The alternative locations showed correlation with the traditional ones as well as lower sensitivity to movements such as typing or mouse moving, and may therefore be a better solution for monitoring human-computer interaction.
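Verifying an alternative sensor site against a traditional one, as described above, comes down to correlating the two recorded traces. The sketch below does this with synthetic skin-conductance-like signals; the signals and the acceptance threshold are assumptions, not the paper's data or criterion.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical skin-conductance traces: a "traditional" site (fingers)
# and an alternative site (ear lobe) tracking the same response plus noise.
finger = np.cumsum(rng.normal(0, 1, 500))          # reference trace
ear_lobe = finger * 0.8 + rng.normal(0, 1.0, 500)  # correlated alternative

r = np.corrcoef(finger, ear_lobe)[0, 1]
print(f"Pearson r between sites: {r:.3f}")
suitable = r > 0.7  # a plausible acceptance threshold, assumed here
```

A second check in the same spirit would compare the variance of each trace during typing segments to quantify motion sensitivity.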
Source:
Metrology and Measurement Systems; 2014, 21, 4; 719-732
0860-8229
Appears in:
Metrology and Measurement Systems
Content provider:
Biblioteka Nauki
Article
Title:
Acoustic Methods in Identifying Symptoms of Emotional States
Authors:
Piątek, Zuzanna
Kłaczyński, Maciej
Links:
https://bibliotekanauki.pl/articles/1953482.pdf
Publication date:
2021
Publisher:
Polska Akademia Nauk. Czasopisma i Monografie PAN
Subjects:
emotion recognition
speech signal processing
clustering analysis
Sammon mapping
Description:
The study investigates the use of the speech signal to recognise speakers’ emotional states. The introduction covers the definition and categorization of emotions, including their expression through facial expressions, speech, and physiological signals. For the purpose of this work, a proprietary resource of emotionally-marked speech recordings was created. The collected recordings come from the media, including live journalistic broadcasts, which show spontaneous emotional reactions to real-time stimuli. For speech signal analysis, a dedicated script was written in Python. Its algorithm includes the parameterization of the speech recordings and the determination of features correlated with the emotional content of speech. After parametrization, the data were clustered to group the speakers’ feature vectors into larger collections imitating specific emotional states. Using Student’s t-test for dependent samples, descriptors were identified that show significant differences in feature values between emotional states. Potential applications of this research are proposed, as well as directions for future studies of the topic.
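The clustering step described above, grouping feature vectors into collections that imitate emotional states, can be sketched with a minimal k-means loop over synthetic 2-D feature vectors (the data, the feature dimensionality, and k are assumptions; the paper's actual features and algorithm may differ).

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeans(X, k=2, iters=20):
    """Minimal k-means: group feature vectors into k clusters."""
    centers = X[:k].astype(float).copy()  # deterministic init from first k points
    for _ in range(iters):
        # Assign each vector to its nearest center.
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two synthetic groups of feature vectors standing in for emotional states.
calm = rng.normal([0, 0], 0.2, size=(30, 2))
excited = rng.normal([4, 4], 0.2, size=(30, 2))
X = np.vstack([calm, excited])
labels, centers = kmeans(X)
print(labels)
```

With well-separated groups, the recovered labels partition the vectors into the two original "emotional states".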
Source:
Archives of Acoustics; 2021, 46, 2; 259-269
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
Speech emotion recognition system for social robots
Authors:
Juszkiewicz, Ł.
Links:
https://bibliotekanauki.pl/articles/384511.pdf
Publication date:
2013
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Automatyki i Pomiarów
Subjects:
speech emotion recognition
prosody
machine learning
Emo-DB
intonation
social robot
Description:
The paper presents a speech emotion recognition system for social robots. Emotions are recognised using global acoustic features of the speech. The system implements speech parameter calculation, feature extraction, feature selection, and classification; all of these phases are described. The system was verified using two emotional speech databases: Polish and German. Perspectives for using such a system in social robots are presented.
Source:
Journal of Automation Mobile Robotics and Intelligent Systems; 2013, 7, 4; 59-65
1897-8649
2080-2145
Appears in:
Journal of Automation Mobile Robotics and Intelligent Systems
Content provider:
Biblioteka Nauki
Article
Title:
Speech Emotion Recognition Based on Voice Fundamental Frequency
Authors:
Dimitrova-Grekow, Teodora
Klis, Aneta
Igras-Cybulska, Magdalena
Links:
https://bibliotekanauki.pl/articles/177227.pdf
Publication date:
2019
Publisher:
Polska Akademia Nauk. Czasopisma i Monografie PAN
Subjects:
emotion recognition
speech signal analysis
voice analysis
fundamental frequency
speech corpora
Description:
The human voice is one of the basic means of communication, and it also readily conveys the speaker’s emotional state. This paper presents experiments on emotion recognition in human speech based on the fundamental frequency. The AGH Emotional Speech Corpus was used. This database consists of audio samples of seven emotions acted by 12 speakers (6 female and 6 male). We explored phrases of all the emotions – all together and in various combinations. The fast Fourier transform and magnitude spectrum analysis were applied to extract the fundamental tone from the speech samples. After extracting several statistical features of the fundamental frequency, we studied whether they carry information about the emotional state of the speaker, applying different AI methods. Analysis of the outcome data was conducted with the following classifiers from the WEKA data mining toolkit: k-nearest neighbours with local induction, random forest, bagging, JRip, and the random subspace method. The results indicate that the fundamental frequency is a promising choice for further experiments.
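The FFT-based fundamental-tone extraction mentioned above can be sketched on a synthetic "voiced" signal: take the magnitude spectrum and read off the strongest peak. The signal, sampling rate, and harmonic structure are assumptions; real speech needs windowing, framing, and a more robust F0 tracker.

```python
import numpy as np

fs = 8000                     # sampling rate (Hz)
t = np.arange(fs) / fs        # 1 second of signal
f0_true = 220.0               # synthetic "voice" fundamental (Hz)
# A crude voiced-speech stand-in: fundamental plus two weaker harmonics.
x = (np.sin(2 * np.pi * f0_true * t)
     + 0.5 * np.sin(2 * np.pi * 2 * f0_true * t)
     + 0.25 * np.sin(2 * np.pi * 3 * f0_true * t))

spectrum = np.abs(np.fft.rfft(x))        # magnitude spectrum
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
f0_est = freqs[np.argmax(spectrum)]      # strongest peak = fundamental here
print(f"estimated F0: {f0_est:.1f} Hz")
```

Statistical features of an F0 track (mean, standard deviation, range per utterance) are then what a classifier such as k-NN or random forest would consume.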
Source:
Archives of Acoustics; 2019, 44, 2; 277-286
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
Facial emotion recognition using average face ratios and fuzzy hamming distance
Authors:
Ounachad, Khalid
Oualla, Mohamed
Sadiq, Abdelalim
Souhar, Abdelghani
Links:
https://bibliotekanauki.pl/articles/2141894.pdf
Publication date:
2020
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Automatyki i Pomiarów
Subjects:
average face ratios
facial emotion recognition
fuzzy hamming distance
perfect face ratios
Description:
Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Nowadays, emotional factors are as important as classic functional aspects of customer purchasing behavior: purchasing choices and decisions result from a careful analysis of product advantages and disadvantages and of affective and emotional aspects. This paper presents a novel method for human emotion classification and recognition. We generate seven referential faces, one suited to each kind of facial emotion, based on perfect face ratios and some classical averages. The basic idea is to extract perfect face ratios for the emotional face and for each referential face as features, and to calculate the distance between them using the fuzzy Hamming distance. To extract perfect face ratios, we use facial landmark points, from which sixteen features are extracted. An experimental evaluation demonstrates the satisfactory performance of our approach on the WSEFEP dataset, and the method can be applied to any existing facial emotion dataset. The proposed algorithm is competitive with related approaches, with a recognition rate of more than 90%.
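The nearest-referential-face idea can be sketched as follows. The fuzzy Hamming distance has several formulations in the literature; the membership-difference sum used here is a simple stand-in, and the four-element "ratio" vectors are invented, so this is an illustration of the scheme rather than the paper's exact algorithm (which uses sixteen landmark-derived features).

```python
import numpy as np

def fuzzy_hamming(a, b):
    """Illustrative fuzzy Hamming distance: sum of membership differences
    between two feature vectors normalised to [0, 1]."""
    a, b = np.clip(a, 0, 1), np.clip(b, 0, 1)
    return float(np.sum(np.abs(a - b)))

# Hypothetical referential "perfect face ratio" vectors, one per emotion.
referential = {
    "happiness": np.array([0.2, 0.8, 0.5, 0.6]),
    "sadness":   np.array([0.7, 0.3, 0.4, 0.2]),
}

def recognize(face_ratios):
    """Assign the emotion of the nearest referential face."""
    return min(referential, key=lambda e: fuzzy_hamming(face_ratios, referential[e]))

probe = np.array([0.25, 0.75, 0.55, 0.6])
print(recognize(probe))
```

With seven referential faces, classification is the same one-nearest-prototype rule over seven candidates.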
Source:
Journal of Automation Mobile Robotics and Intelligent Systems; 2020, 14, 4; 37-44
1897-8649
2080-2145
Appears in:
Journal of Automation Mobile Robotics and Intelligent Systems
Content provider:
Biblioteka Nauki
Article
Title:
Zastosowanie multimodalnej klasyfikacji w rozpoznawaniu stanów emocjonalnych na podstawie mowy spontanicznej
Spontaneous emotion recognition from speech signal using multimodal classification
Authors:
Kamińska, D.
Pelikant, A.
Links:
https://bibliotekanauki.pl/articles/408014.pdf
Publication date:
2012
Publisher:
Politechnika Lubelska. Wydawnictwo Politechniki Lubelskiej
Subjects:
rozpoznawanie emocji
sygnał mowy
algorytm kNN
emotion recognition
speech signal
k-NN algorithm
Description:
Artykuł prezentuje zagadnienie związane z rozpoznawaniem stanów emocjonalnych na podstawie analizy sygnału mowy. Na potrzeby badań stworzona została polska baza mowy spontanicznej, zawierająca wypowiedzi kilkudziesięciu osób, w różnym wieku i różnej płci. Na podstawie analizy sygnału mowy stworzono przestrzeń cech. Klasyfikację stanowi multimodalny mechanizm rozpoznawania, oparty na algorytmie kNN. Średnia poprawność rozpoznawania wynosi 83%.
The article presents the issue of emotion recognition from a speech signal. For this study, a Polish spontaneous speech database containing utterances from several dozen people of different ages and genders was created. Features were determined from the speech signal. Recognition was based on multimodal classification using the kNN algorithm. The average recognition accuracy was 83%.
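The kNN classification at the core of this system can be sketched in a few lines: label an unseen feature vector by majority vote among its k nearest training vectors. The toy features and emotion labels below are assumptions for illustration.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training vectors."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

# Toy feature vectors (e.g. pitch mean, energy) labelled with emotions.
X_train = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.2],
                    [3.0, 3.0], [3.2, 2.9], [2.8, 3.1]])
y_train = ["anger", "anger", "anger", "sadness", "sadness", "sadness"]

print(knn_predict(X_train, y_train, np.array([1.05, 1.0])))
```

A multimodal variant runs such classifiers over several feature groups and combines their votes.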
Source:
Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska; 2012, 3; 36-39
2083-0157
2391-6761
Appears in:
Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska
Content provider:
Biblioteka Nauki
Article
Title:
Pomiary parametrów akustycznych mowy emocjonalnej - krok ku modelowaniu wokalnej ekspresji emocji
Measurements of emotional speech acoustic parameters - a step towards vocal emotion expression modelling
Authors:
Igras, M.
Wszołek, W.
Links:
https://bibliotekanauki.pl/articles/154905.pdf
Publication date:
2012
Publisher:
Stowarzyszenie Inżynierów i Techników Mechaników Polskich
Subjects:
rozpoznawanie emocji
wokalne korelaty emocji
przetwarzanie sygnału mowy
emotion recognition
vocal correlates of emotions
Description:
Niniejsza praca podejmuje próbę pomiaru cech sygnału mowy skorelowanych z jego zawartością emocjonalną (na przykładzie emocji podstawowych). Zaprezentowano korpus mowy zaprojektowany tak, by umożliwić różnicową analizę niezależną od mówcy i treści oraz przeprowadzono testy mające na celu ocenę jego przydatności do automatyzacji wykrywania emocji w mowie. Zaproponowano robocze profile wokalne emocji. Artykuł prezentuje również propozycje aplikacji medycznych opartych na pomiarach emocji w głosie.
The paper presents an approach to creating new measures of the emotional content of speech signals; the results constitute the basis for further research in this field. To analyse differences between the basic emotional states independently of speaker and semantic content, a corpus of acted emotional speech was designed and recorded. Alternative methods of emotional speech signal acquisition are presented and discussed (Section 2). Preliminary tests were performed to evaluate the corpus applicability to automatic emotion recognition. At the recording-labelling stage, human perceptual tests were applied (using recordings with and without semantic content); the results are presented in the form of confusion tables (Tabs. 1 and 2). Further signal processing, parametrisation and feature extraction (Section 3), allowed a set of features characteristic of each emotion to be extracted and led to preliminary vocal emotion profiles (sets of acoustic features characteristic of each of the basic emotions); an example is presented in Tab. 3. Using selected feature vectors, methods for automatic classification (k nearest neighbours and a self-organizing neural network) were tested. Section 4 contains the conclusions: an analysis of the variables associated with vocal expression of emotions and the challenges in further development. The paper also discusses the use of this kind of research in medical applications (Section 5).
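The confusion tables produced by the perceptual tests (Tabs. 1 and 2) are built by tallying intended versus perceived labels. A minimal sketch, with invented listener responses rather than the paper's data:

```python
import numpy as np

def confusion_table(true_labels, perceived_labels, classes):
    """Rows: intended emotion; columns: emotion chosen by listeners."""
    idx = {c: i for i, c in enumerate(classes)}
    table = np.zeros((len(classes), len(classes)), dtype=int)
    for t, p in zip(true_labels, perceived_labels):
        table[idx[t], idx[p]] += 1
    return table

classes = ["joy", "fear", "anger"]
# Hypothetical perceptual-test responses for six recordings.
intended  = ["joy", "joy", "fear", "fear", "anger", "anger"]
perceived = ["joy", "joy", "fear", "anger", "anger", "anger"]
print(confusion_table(intended, perceived, classes))
```

Off-diagonal counts directly show which emotions listeners confuse, which is what motivates the per-emotion vocal profiles.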
Source:
Pomiary Automatyka Kontrola; 2012, R. 58, nr 4, 4; 335-338
0032-4140
Appears in:
Pomiary Automatyka Kontrola
Content provider:
Biblioteka Nauki
Article
Title:
Speech emotion recognition based on sparse representation
Authors:
Yan, J.
Wang, X.
Gu, W.
Ma, L.
Links:
https://bibliotekanauki.pl/articles/177778.pdf
Publication date:
2013
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
speech emotion recognition
sparse partial least squares regression (SPLSR)
feature selection and dimensionality reduction
Description:
Speech emotion recognition is a meaningful and challenging problem across a number of domains, including sentiment analysis, computer science, and pedagogy. In this study, we investigate speech emotion recognition based on the sparse partial least squares regression (SPLSR) approach in depth. We use sparse partial least squares regression to implement feature selection and dimensionality reduction on the whole set of acquired speech emotion features. With the SPLSR method, the coefficients of redundant and uninformative speech emotion features are shrunk to zero, while useful and informative features are retained and passed to the subsequent classification step. A number of tests on the Berlin database reveal that the recognition rate of the SPLSR method reaches 79.23%, which is superior to the other dimensionality reduction methods compared.
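The "shrink uninformative coefficients to zero, keep informative ones" idea can be illustrated with a one-component PLS-style projection plus hard thresholding. This is a deliberately simplified stand-in for SPLSR (the data, threshold, and single-component setup are all assumptions), shown only to make the feature-selection-by-sparsity mechanism concrete.

```python
import numpy as np

rng = np.random.default_rng(3)

# 500 samples, 6 features: only the first two actually drive the response.
X = rng.normal(size=(500, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=500)

# One-component PLS-style weight vector: covariance of each feature with y.
w = X.T @ y
w /= np.linalg.norm(w)

# Sparsity step: shrink small-magnitude weights to exactly zero.
threshold = 0.2
w_sparse = np.where(np.abs(w) >= threshold, w, 0.0)
print("kept features:", np.nonzero(w_sparse)[0])

# Project onto the sparse component: feature selection + dimension reduction.
scores = X @ w_sparse
```

The resulting low-dimensional `scores` (here one component) are what a downstream emotion classifier would be trained on.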
Source:
Archives of Acoustics; 2013, 38, 4; 465-470
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
Rozumienie mowy jako moderator związku rozpoznawania emocji z nasileniem symptomów zaburzeń ze spektrum autyzmu (ASD)
Speech comprehension as a moderator of the relationship between emotion recognition and the severity of autism spectrum disorder (ASD) symptoms
Authors:
Krzysztofik, Karolina
Links:
https://bibliotekanauki.pl/articles/2054372.pdf
Publication date:
2021
Publisher:
Uniwersytet Marii Curie-Skłodowskiej. Wydawnictwo Uniwersytetu Marii Curie-Skłodowskiej
Subjects:
autism spectrum disorder
ASD
emotion recognition
speech comprehension
zaburzenia ze spektrum autyzmu
rozpoznawanie emocji
rozumienie mowy
Description:
Współcześni badacze podkreślają konsekwencje trudności osób z zaburzeniami ze spektrum autyzmu (Autism Spectrum Disorder, ASD) w rozpoznawaniu emocji dla nasilenia symptomów tego zaburzenia. Jednocześnie wiele z osób z ASD potrafi rozpoznawać emocje innych osób dzięki strategiom kompensacyjnym opartym na relatywnie dobrze rozwiniętych kompetencjach poznawczych i językowych. Wydaje się zatem, że umiejętności językowe osób z ASD mogą moderować związek rozpoznawania emocji z nasileniem symptomów ASD. Celem prezentowanych badań było ustalenie, czy poziom rozumienia mowy osób z ASD moderuje związek rozpoznawania emocji z nasileniem symptomów ASD. Przebadano grupę 63 dzieci z ASD w wieku od 3 lat i 7 miesięcy do 9 lat i 3 miesięcy, wykorzystując następujące narzędzia: Skalę Nasilenia Symptomów ASD, podskalę Rozpoznawanie Emocji ze Skali Mechanizmu Teorii Umysłu oraz podskalę Rozumienie Mowy ze skali Iloraz Inteligencji i Rozwoju dla Dzieci w Wieku Przedszkolnym (IDS-P). Uzyskane wyniki wskazują, że poziom rozumienia mowy moderuje związek poziomu rozwoju rozpoznawania emocji z nasileniem symptomów ASD w zakresie deficytów w komunikowaniu i interakcjach. Wyniki te znajdują swoje implikacje dla włączenia terapii rozumienia mowy w proces rehabilitacji osób z ASD, a także dla teoretycznej refleksji nad uwarunkowaniami nasilenia symptomów ASD.
Contemporary researchers underline the consequences that difficulties in emotion recognition have for symptom severity in persons with autism spectrum disorder (ASD). Individuals with ASD, when trying to recognize the emotional states of others, often use compensatory strategies based on relatively well-developed cognitive and linguistic competences. The relationship between emotion recognition and the severity of ASD symptoms may therefore be moderated by linguistic competences. This study aimed to determine whether the level of speech comprehension moderates the relationship between emotion recognition and ASD symptom severity. Participants were 63 children with ASD aged from 3 years and 7 months to 9 years and 3 months. The following tools were used: the ASD Symptom Severity Scale, the Emotion Recognition subscale of the Theory of Mind Scale, and the Speech Comprehension subscale from the Intelligence and Development Scales – Preschool (IDS-P). The results indicate that the level of speech comprehension moderates the relationship between the level of emotion recognition and ASD symptom severity with respect to deficits in communication and interaction. These results have implications for integrating speech comprehension therapy into the rehabilitation of individuals with ASD, as well as for theoretical reflection on the determinants of ASD symptom severity.
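Moderation of this kind is commonly tested as an interaction term in a regression model: severity is regressed on emotion recognition, the moderator, and their product. The sketch below recovers a known interaction coefficient from synthetic data (all variables and coefficients are invented, not the study's), showing the mechanism rather than the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300

emotion_recognition = rng.normal(size=n)      # predictor
speech_comprehension = rng.normal(size=n)     # moderator
# Symptom severity: the effect of emotion recognition depends on the
# moderator through the interaction term (coefficients are made up).
severity = (5.0 - 1.0 * emotion_recognition - 0.5 * speech_comprehension
            - 0.8 * emotion_recognition * speech_comprehension
            + 0.1 * rng.normal(size=n))

# Design matrix: intercept, both main effects, and the interaction.
X = np.column_stack([np.ones(n), emotion_recognition, speech_comprehension,
                     emotion_recognition * speech_comprehension])
coef, *_ = np.linalg.lstsq(X, severity, rcond=None)
print("interaction coefficient:", round(coef[3], 2))
```

A non-zero interaction coefficient is the statistical signature of moderation: the slope of severity on emotion recognition changes with speech comprehension.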
Source:
Annales Universitatis Mariae Curie-Skłodowska, sectio J – Paedagogia-Psychologia; 2021, 34, 3; 199-219
0867-2040
Appears in:
Annales Universitatis Mariae Curie-Skłodowska, sectio J – Paedagogia-Psychologia
Content provider:
Biblioteka Nauki
Article
Title:
Using BCI and EEG to process and analyze driver’s brain activity signals during VR simulation
Authors:
Nader, Mirosław
Jacyna-Gołda, Ilona
Nader, Stanisław
Nehring, Karol
Links:
https://bibliotekanauki.pl/articles/2067410.pdf
Publication date:
2021
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
signal processing
EEG
BCI
emotion recognition
driver
virtual reality
przetwarzanie sygnałów
rozpoznawanie emocji
kierowca
symulacja wirtualna
Description:
The use of popular brain-computer interfaces (BCI) to analyze brain activity signals is a very current research problem taken up in various aspects by many researchers. Such analysis turns out to be particularly useful when studying the flows of information and signals in the human-machine-environment system, especially in the field of transportation sciences. This article presents the results of a pilot study of driver behavior with the use of a proprietary simulator based on Virtual Reality technology. The study uses the technology of measuring the signals emitted by the human mind and its specific zones in response to given environmental factors. A solution based on virtual reality, limiting the external stimuli emitted by the real world, was proposed, and a computational analysis of the obtained data was performed. The research focused on traffic situations and how they affect the subject. The test was attended by representatives of various age groups, both with and without a driving license. This study presents an original functional model of a research stand in VR technology that we designed and built. Testing in VR conditions makes it possible to limit the influence of undesirable external stimuli that may distort the readings, while increasing the range of road events that can be simulated without generating any risk for the participant. In the presented studies, the BCI was used to assess the driver's behavior by registering the activity of selected brain waves of the examined person. Electroencephalography (EEG) was used to study the activity of the brain and its response to stimuli coming from the environment created in Virtual Reality. Detection of electrical activity is possible thanks to electrodes placed on the skin in selected areas of the skull. The structure of the proprietary test stand for signal and information flow simulation tests, which allows the selection of measured signals and the method of parameter recording, is presented. An important part of this study is the presentation of the results of pilot studies obtained in the course of real research on the behavior of a car driver.
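A standard first step in quantifying "activity of selected brain waves" is band-power analysis of the EEG via a power spectral density estimate. The sketch below uses a synthetic alpha-dominated signal (the signal, sampling rate, and band edges are common conventions assumed here, not details from the article).

```python
import numpy as np
from scipy.signal import welch

fs = 256                              # typical EEG sampling rate (Hz)
t = np.arange(10 * fs) / fs           # 10 s of synthetic signal
rng = np.random.default_rng(5)
# Synthetic "relaxed driver" EEG: dominant 10 Hz alpha rhythm plus noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)  # Welch PSD estimate

def band_power(lo, hi):
    """Total PSD within a frequency band (relative units)."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

alpha = band_power(8, 13)   # alpha band
beta = band_power(13, 30)   # beta band
print(f"alpha/beta power ratio: {alpha / beta:.1f}")
```

Tracking such band-power ratios over time, per electrode, is one conventional way to relate EEG activity to simulated road events.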
Source:
Archives of Transport; 2021, 60, 4; 137-153
0866-9546
2300-8830
Appears in:
Archives of Transport
Content provider:
Biblioteka Nauki
Article
Title:
Automatic speech based emotion recognition using paralinguistics features
Authors:
Hook, J.
Noroozi, F.
Toygar, O.
Anbarjafari, G.
Links:
https://bibliotekanauki.pl/articles/200261.pdf
Publication date:
2019
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
random forests
speech emotion recognition
machine learning
support vector machines
lasy
rozpoznawanie emocji mowy
nauczanie maszynowe
Description:
Affective computing studies and develops systems capable of detecting human affect. The search for universal, well-performing features for speech-based emotion recognition is ongoing. In this paper, a small set of features with support vector machines as the classifier is evaluated on the Surrey Audio-Visual Expressed Emotion database, the Berlin Database of Emotional Speech, the Polish Emotional Speech database, and the Serbian emotional speech database. It is shown that a set of 87 features can offer results on par with the state of the art, yielding 80.21%, 88.6%, 75.42%, and 93.41% average emotion recognition rates, respectively. In addition, an experiment is conducted to explore the significance of gender in emotion recognition using random forests. Two models, trained on the first and second database respectively, and four speakers were used to determine the effects. The feature set performs well for both male and female speakers, yielding approximately 27% average emotion recognition in both models. The emotions of female speakers were recognized 18% of the time in the first model and 29% in the second; a similar effect is seen with male speakers, for whom the first model yields a 36% and the second a 28% average emotion recognition rate. This illustrates the relationship between the constitution of the training data and emotion recognition accuracy.
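The per-gender recognition rates reported above come down to computing accuracy within speaker groups. A minimal sketch with invented predictions (the labels and values are not the paper's data):

```python
import numpy as np

# Hypothetical per-utterance results: speaker gender, true and predicted emotion.
genders = np.array(["f", "f", "f", "m", "m", "m", "f", "m"])
true_em = np.array(["joy", "anger", "sad", "joy", "anger", "sad", "joy", "sad"])
pred_em = np.array(["joy", "anger", "joy", "joy", "sad", "sad", "joy", "sad"])

def recognition_rate(mask):
    """Fraction of correctly recognised utterances within a speaker group."""
    return float(np.mean(true_em[mask] == pred_em[mask]))

for g in ("f", "m"):
    print(g, recognition_rate(genders == g))
```

Comparing these group-wise rates across models trained on different corpora is exactly the gender experiment the abstract describes.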
Source:
Bulletin of the Polish Academy of Sciences. Technical Sciences; 2019, 67, 3; 479-488
0239-7528
Appears in:
Bulletin of the Polish Academy of Sciences. Technical Sciences
Content provider:
Biblioteka Nauki
Article
Title:
Multi-model hybrid ensemble weighted adaptive approach with decision level fusion for personalized affect recognition based on visual cues
Authors:
Jadhav, Nagesh
Sugandhi, Rekha
Links:
https://bibliotekanauki.pl/articles/2086876.pdf
Publication date:
2021
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
deep learning
convolution neural network
emotion recognition
transfer learning
late fusion
uczenie głębokie
konwolucyjna sieć neuronowa
rozpoznawanie emocji
Description:
In the domain of affective computing, different emotional expressions play an important role. Facial expressions, or visual cues, are the primary channel used to convey the emotional state of humans; they convey the affective state more convincingly than any other cues. With the advancement of deep learning techniques, a convolutional neural network (CNN) can be used to automatically extract features from visual cues; however, variable-sized and biased datasets are a vital challenge as far as the implementation of deep models is concerned. The dataset used for training the model also plays a significant role in the retrieved results. In this paper, we propose a multi-model hybrid ensemble weighted adaptive approach with decision-level fusion for personalized affect recognition based on visual cues. We use a CNN and a pre-trained ResNet-50 model for transfer learning. The VGGFace model's weights are used to initialize the weights of ResNet-50 for fine-tuning. The proposed system shows a significant improvement in test accuracy in affective state recognition compared to a singleton CNN model developed from scratch or a transfer-learned model. The proposed methodology is validated on the Karolinska Directed Emotional Faces (KDEF) dataset with 77.85% accuracy. The obtained results are promising compared to existing state-of-the-art methods.
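The decision-level (late) fusion step in the abstract can be illustrated in a few lines: each model emits class probabilities, and a weighted average of those probabilities decides the final label. The probability vectors, class names, and fixed weights below are illustrative assumptions; in the paper the weights are adapted per model.

```python
# Sketch of decision-level (late) fusion of two classifiers' outputs.
import numpy as np

classes = ["happy", "sad", "angry", "neutral"]

# Hypothetical softmax outputs from two models for one face image.
p_cnn    = np.array([0.60, 0.10, 0.20, 0.10])  # CNN trained from scratch
p_resnet = np.array([0.30, 0.05, 0.55, 0.10])  # fine-tuned ResNet-50

# Weighted average at the decision level; the weights could be adapted
# from per-model validation accuracy (fixed here for illustration).
w = np.array([0.4, 0.6])
p_fused = w[0] * p_cnn + w[1] * p_resnet
print(classes[int(np.argmax(p_fused))])  # -> happy
```

Fusing at the decision level keeps the two backbones independent, so either can be swapped or re-weighted without retraining the other.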
Źródło:
Bulletin of the Polish Academy of Sciences. Technical Sciences; 2021, 69, 6; e138819, 1--11
0239-7528
Pojawia się w:
Bulletin of the Polish Academy of Sciences. Technical Sciences
Dostawca treści:
Biblioteka Nauki
Artykuł
Tytuł:
Rozpoznawanie emocji w tekstach polskojęzycznych z wykorzystaniem metody słów kluczowych
Emotion recognition in polish texts based on keywords detection method
Autorzy:
Nowaczyk, A.
Jackowska-Strumiłło, L.
Powiązania:
https://bibliotekanauki.pl/articles/408760.pdf
Data publikacji:
2017
Wydawca:
Politechnika Lubelska. Wydawnictwo Politechniki Lubelskiej
Tematy:
rozpoznawanie emocji
interakcja człowiek-komputer
przetwarzanie języka naturalnego
przetwarzanie tekstów
emotion recognition
human-computer interaction
natural language processing
text processing
Opis:
Dynamiczny rozwój sieci społecznościowych sprawił, że Internet stał się najpopularniejszym medium komunikacyjnym. Zdecydowana większość komunikatów wymieniana jest w postaci wiadomości tekstowych, które niejednokrotnie odzwierciedlają stan emocjonalny autora. Identyfikacja emocji w tekstach znajduje szerokie zastosowanie w handlu elektronicznym czy telemedycynie, stając się jednocześnie ważnym elementem w komunikacji człowiek-komputer. W niniejszym artykule zaprezentowano metodę rozpoznawania emocji w tekstach polskojęzycznych opartą o algorytm detekcji słów kluczowych i lematyzację. Uzyskano dokładność rzędu 60%. Opracowano również pierwszą polskojęzyczną bazę słów kluczowych wyrażających emocje.
The dynamic development of social networks has made the Internet the most popular communication medium. A vast majority of messages are exchanged in text format and very often reflect their authors' emotional states. Detection of emotions in text is widely used in e-commerce and telemedicine, and is becoming a milestone in the field of human-computer interaction. The paper presents a method of emotion recognition in Polish-language texts based on a keyword detection algorithm with lemmatization. The obtained accuracy is about 60%. The first Polish-language database of keywords expressing emotions has also been developed.
Źródło:
Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska; 2017, 7, 2; 102-105
2083-0157
2391-6761
Pojawia się w:
Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska
Dostawca treści:
Biblioteka Nauki
Artykuł
Tytuł:
Rozpoznawanie i pomiar emocji w badaniach doświadczeń klienta
Recognition and Measurement of Emotions in Customer Experience Research
Autorzy:
Budzanowska-Drzewiecka, Małgorzata
Lubowiecki-Vikuk, Adrian
Powiązania:
https://bibliotekanauki.pl/articles/27839578.pdf
Data publikacji:
2023
Wydawca:
Wydawnictwo Uniwersytetu Ekonomicznego we Wrocławiu
Tematy:
doświadczenie klienta
rozpoznawanie emocji
automatyczna analiza ekspresji twarzy
FaceReader
pomiar emocji
customer experience
emotion recognition
automatic facial expression analysis
measuring emotions
Opis:
Badanie doświadczeń klienta wymaga rozwijania metodyki ich pomiaru pozwalającej na uwzględnienie ich złożoności. Jedną z ważnych składowych doświadczeń są emocje, których rozpoznawanie i pomiar stanowi wciąż wyzwanie dla badaczy. Celem artykułu jest dyskusja na temat metod i technik wykorzystywanych do rozpoznawania i pomiaru emocji w badaniach doświadczeń klienta. Szczególną uwagę poświęcono wykorzystaniu technik wywodzących się z neuronauki konsumenckiej, w tym dylematom związanym z sięganiem po automatyczną analizę ekspresji mimicznej. Studia literaturowe pozwoliły na dyskusję dotyczącą korzyści i ograniczeń stosowania automatycznej analizy ekspresji mimicznej w pomiarze doświadczeń klientów. Mimo ograniczeń, mogą one być traktowane jako atrakcyjne uzupełnienie metod i technik pozwalających na uchwycenie emocjonalnych komponentów doświadczenia klienta na różnych etapach (przed zakupem, w jego czasie i po nim).
The study of customer experience requires the development of methodologies which measure such experience and account for its complexity. One important component of customer experience is emotion, the recognition and measurement of which is still a challenge for researchers. The purpose of this article is to discuss methods and techniques used to recognise and measure emotions in customer experience research. Particular attention is paid to the use of techniques derived from consumer neuroscience, including the dilemmas associated with reaching for automatic analysis of facial expressions. The literature review is indicative of the ongoing discussion on the benefits and limitations of using the automatic analysis of facial expressions technique in measuring customer experience. Despite its limitations, such a technique can be an attractive complement to methods and techniques used to capture the emotional components of customer experience at different stages (before, during, and after purchase).
Źródło:
Prace Naukowe Uniwersytetu Ekonomicznego we Wrocławiu; 2023, 67, 5; 67-77
1899-3192
Pojawia się w:
Prace Naukowe Uniwersytetu Ekonomicznego we Wrocławiu
Dostawca treści:
Biblioteka Nauki
Artykuł
Tytuł:
Comparison of speaker dependent and speaker independent emotion recognition
Autorzy:
Rybka, J.
Janicki, A.
Powiązania:
https://bibliotekanauki.pl/articles/330055.pdf
Data publikacji:
2013
Wydawca:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Tematy:
speech processing
emotion recognition
EMO-DB
support vector machines
artificial neural network
przetwarzanie mowy
rozpoznawanie emocji
maszyna wektorów wspierających
sztuczna sieć neuronowa
Opis:
This paper describes a study of emotion recognition based on speech analysis. The introduction to the theory contains a review of emotion inventories used in various studies of emotion recognition, as well as the speech corpora applied, methods of speech parametrization, and the most commonly employed classification algorithms. In the current study the EMO-DB speech corpus and three selected classifiers, the k-Nearest Neighbor (k-NN), the Artificial Neural Network (ANN) and Support Vector Machines (SVMs), were used in experiments. SVMs turned out to provide the best classification accuracy of 75.44% in the speaker-dependent mode, that is, when speech samples from the same speaker were included in the training corpus. Various speaker-dependent and speaker-independent configurations were analyzed and compared. Emotion recognition in speaker-dependent conditions usually yielded higher accuracy than a similar but speaker-independent configuration. The improvement was especially well observed if the base recognition ratio of a given speaker was low. Happiness and anger, as well as boredom and neutrality, proved to be the pairs of emotions most often confused.
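The speaker-dependent vs. speaker-independent contrast comes down to how the train/test split treats the test speaker, which the sketch below illustrates with a k-NN classifier. The data is synthetic (random labels, so both accuracies hover near chance); speaker IDs, feature offsets, and the 4-speaker setup are illustrative assumptions meant only to show the split protocol.

```python
# Sketch: the same classifier evaluated speaker-independently (test
# speaker fully held out) and speaker-dependently (test speaker partly
# present in the training set).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
n_per_speaker, n_features, speakers = 40, 12, [0, 1, 2, 3]

X, y, spk = [], [], []
for s in speakers:
    # Each speaker gets a feature offset, mimicking speaker variability.
    X.append(rng.normal(loc=s, size=(n_per_speaker, n_features)))
    y.append(rng.integers(0, 4, size=n_per_speaker))   # random emotion labels
    spk.append(np.full(n_per_speaker, s))
X, y, spk = np.vstack(X), np.concatenate(y), np.concatenate(spk)

test_mask = spk == 3  # evaluate on speaker 3

# Speaker-independent: speaker 3 never appears in the training corpus.
knn_si = KNeighborsClassifier(n_neighbors=5).fit(X[~test_mask], y[~test_mask])
acc_si = knn_si.score(X[test_mask], y[test_mask])

# Speaker-dependent: half of speaker 3's samples join the training corpus.
idx = np.where(test_mask)[0]
train_idx = np.concatenate([np.where(~test_mask)[0], idx[: len(idx) // 2]])
knn_sd = KNeighborsClassifier(n_neighbors=5).fit(X[train_idx], y[train_idx])
acc_sd = knn_sd.score(X[idx[len(idx) // 2:]], y[idx[len(idx) // 2:]])

print(acc_si, acc_sd)
```

On real emotional speech, where labels correlate with speaker-specific acoustic patterns, the speaker-dependent split is what typically yields the higher accuracy reported in the abstract.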
Źródło:
International Journal of Applied Mathematics and Computer Science; 2013, 23, 4; 797-808
1641-876X
2083-8492
Pojawia się w:
International Journal of Applied Mathematics and Computer Science
Dostawca treści:
Biblioteka Nauki
Artykuł
Tytuł:
Music Playlist Generation using Facial Expression Analysis and Task Extraction
Autorzy:
Sen, A.
Popat, D.
Shah, H.
Kuwor, P.
Johri, E.
Powiązania:
https://bibliotekanauki.pl/articles/908868.pdf
Data publikacji:
2016
Wydawca:
Uniwersytet Marii Curie-Skłodowskiej. Wydawnictwo Uniwersytetu Marii Curie-Skłodowskiej
Tematy:
facial expression analysis
emotion recognition
feature extraction
Viola-Jones face detection
Gabor filter
AdaBoost
k-NN algorithm
task extraction
music classification
playlist generation
Opis:
In the day-to-day stressful environment of the IT industry, working professionals often lack appropriate relaxation time. To keep a person stress-free, various technical and non-technical stress-releasing methods are now being adopted. People working on computers can be categorized as administrators, programmers, etc., each of whom requires different ways to unwind. Work pressure and vexation of any kind can be read from a person's emotions, and facial expressions are the key to analyzing a person's current psychology. In this paper, we discuss an intuitive smart music player. The player captures the facial expressions of a person working on the computer and identifies their current emotion; music is then played to help the user relax. The music player also takes into account the foreground processes which the person is executing on the computer. Since various sorts of music are available to boost one's enthusiasm, an ideal playlist of songs is created and played for the person, taking into consideration the tasks executed on the system and the emotions they currently carry. The person can browse the playlist and modify it, making the system more flexible. This music player thus allows working professionals to stay relaxed in spite of their workloads.
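The final selection step, combining the recognized emotion with the extracted foreground task to pick songs, can be sketched as a simple lookup with a fallback. The emotion labels, task categories, and playlist names below are all illustrative assumptions, not the paper's actual categories.

```python
# Sketch: map (recognized emotion, foreground task) to a playlist.
PLAYLISTS = {
    ("sad", "coding"): ["Uplifting Piano", "Calm Focus Mix"],
    ("sad", "browsing"): ["Feel-Good Pop"],
    ("happy", "coding"): ["Instrumental Beats"],
    ("happy", "browsing"): ["Upbeat Hits"],
}

def pick_playlist(emotion: str, task: str) -> list[str]:
    # Fall back to a neutral playlist when the pair is unknown.
    return PLAYLISTS.get((emotion, task), ["Ambient Neutral"])

print(pick_playlist("sad", "coding"))  # -> ['Uplifting Piano', 'Calm Focus Mix']
```

Keeping the mapping as data rather than code is what lets the user edit the playlist, as the abstract describes.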
Źródło:
Annales Universitatis Mariae Curie-Skłodowska. Sectio AI, Informatica; 2016, 16, 2; 1-6
1732-1360
2083-3628
Pojawia się w:
Annales Universitatis Mariae Curie-Skłodowska. Sectio AI, Informatica
Dostawca treści:
Biblioteka Nauki
Artykuł
Tytuł:
Między słowem a gestem: konsekwencje stosowania technologii wspomagających czytanie i pisanie
Using Technology to Support Reading and Writing: Consequences and Implications
Autorzy:
Augustyn, Kamila
Powiązania:
https://bibliotekanauki.pl/articles/14541481.pdf
Data publikacji:
2023-01-30
Wydawca:
Uniwersytet Warszawski. Wydział Dziennikarstwa, Informacji i Bibliologii
Tematy:
autosugestie
pisanie gestyczne
współedytowanie
VSTF
urządzenia cyfrowe
new media literacy
rozpoznawanie emocji
predictive text suggestions
gesture writing
collaborative writing
mobile devices
emotion recognition
Opis:
I summarize findings from studies conducted over the last 20 years regarding writing and reading on digital devices. The aim of this literature review is to explore the effects on human cognitive ability of such functions as predictive text suggestions, gesture writing, collaborative writing, and visual-syntactic text formatting (VSTF). I also consider how writing patterns can be used. Studies have shown that auto-suggestions can significantly change the final message and make it less original. The way in which content is displayed has a huge impact on how it is perceived. VSTF promotes careful reading, improves memory, and facilitates text analysis in older students. Comparing handwriting to writing on digital devices has demonstrated the importance of visual aspects for the recognition and copying of letters. Handwritten notes improve memory and stimulate deeper levels of cognitive function. The use of VSTF and the co-editing of documents can be most beneficial to low-level language learners. A more in-depth analysis is needed of the impact of emotions on forms of collaboration, as well as of the effect of the efficiency and multimodality of text input on comprehension.
Źródło:
Z Badań nad Książką i Księgozbiorami Historycznymi; 2022, 16, 4; 587-618
1897-0788
2544-8730
Pojawia się w:
Z Badań nad Książką i Księgozbiorami Historycznymi
Dostawca treści:
Biblioteka Nauki
Artykuł
Tytuł:
Когнитивно-культурные, индивидуально-психологические и возрастные особенности способности к распознаванию эмоций
Kulturoznawcze, indywidualne psychologiczne i związane z wiekiem cechy zdolności rozpoznawania emocji
Cognitive-cultural, individual-psychological and age particularities of the ability to recognize emotions
Autorzy:
Hvorova, Ekaterina
Powiązania:
https://bibliotekanauki.pl/articles/1388051.pdf
Data publikacji:
2016-03-31
Wydawca:
Uniwersytet Gdański. Wydawnictwo Uniwersytetu Gdańskiego
Tematy:
emotion recognition
development of emotional intelligence components
age particularities of the ability to recognize emotions
cognitive and cultural particularities of the ability to recognize emotions
Opis:
This article describes the features of the development of the emotional sphere. It emphasizes the importance of the primary school age in the development of certain components of emotional intelligence, one of which is the ability to recognize emotions. In the early school years, children are able to understand emotions, but mostly with the help of their own emotional experience and/or according to situations they are used to experiencing; they rely mostly on the context of the situation, and, as we know, this does not always work correctly: different people in the same situation may experience completely different emotions. Few children are able to establish the reasons that caused other people's emotions. Another component of emotional intelligence is the ability to control one's own emotions; emotion regulation becomes available to children after the socialization associated with the first years at school. Child development is partly determined by the process of socialization, which shapes specific cognitive representations of emotions, so-called emotional prototypes. The culture in which the child grows up also affects the process of emotion recognition and expression: for example, in individualistic cultures emotional expression and recognition is encouraged, while collectivist cultures have certain rules of emotional expression fixing in which situations and to what extent the expression of emotions is permissible.
Źródło:
Problemy Wczesnej Edukacji; 2016, 32, 1; 126-129
1734-1582
2451-2230
Pojawia się w:
Problemy Wczesnej Edukacji
Dostawca treści:
Biblioteka Nauki
Artykuł
Tytuł:
Music Mood Visualization Using Self-Organizing Maps
Autorzy:
Plewa, M.
Kostek, B.
Powiązania:
https://bibliotekanauki.pl/articles/176410.pdf
Data publikacji:
2015
Wydawca:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Tematy:
music mood
music parameterization
MER (Music Emotion Recognition)
MIR (Music Information Retrieval)
Multidimensional Scaling (MDS)
principal component analysis (PCA)
Self-Organizing Maps (SOM)
ANN (Artificial Neural Networks)
Opis:
Due to the increasing amount of music being made available in digital form on the Internet, an automatic organization of music is sought. The paper presents an approach to the graphical representation of the mood of songs based on Self-Organizing Maps. Parameters describing the mood of music are proposed, calculated, and then analyzed by correlating them with mood dimensions derived from Multidimensional Scaling. A map is created in which music excerpts with similar moods are organized next to each other on a two-dimensional display.
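The core SOM mechanism behind such a mood map can be sketched briefly: a 2-D grid of weight vectors is repeatedly pulled toward the input feature vectors, so excerpts with similar features land in nearby cells. The grid size, learning rate, neighborhood width, and the 2-D "mood" features (e.g. valence/arousal-like parameters) below are illustrative assumptions, not the paper's configuration.

```python
# Minimal self-organizing map: excerpts with similar mood parameters
# map to neighboring cells of a 2-D grid.
import numpy as np

rng = np.random.default_rng(1)
grid_h, grid_w, dim = 6, 6, 2
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

def train(weights, data, epochs=20, lr=0.5, sigma=2.0):
    for _ in range(epochs):
        for x in data:
            # Best-matching unit: cell whose weight vector is closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood around the BMU on the grid.
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma**2))
            weights = weights + lr * g[..., None] * (x - weights)
    return weights

weights = rng.random((grid_h, grid_w, dim))
songs = rng.random((30, dim))          # mood parameters of 30 excerpts
weights = train(weights, songs)

# Place one excerpt on the trained map.
d = np.linalg.norm(weights - songs[0], axis=-1)
print(np.unravel_index(np.argmin(d), d.shape))
```

After training, plotting each excerpt at its best-matching cell yields the kind of two-dimensional mood display the abstract describes.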
Źródło:
Archives of Acoustics; 2015, 40, 4; 513-525
0137-5075
Pojawia się w:
Archives of Acoustics
Dostawca treści:
Biblioteka Nauki
Artykuł
