
You are searching for the phrase "emotion recognition" by criterion: Subject


Title:
The relationship between Trait Emotional Intelligence and emotion recognition in the context of COVID-19 pandemic
Authors:
Cannavò, Marco
Barberis, Nadia
Larcan, Rosalba
Cuzzocrea, Francesca
Links:
https://bibliotekanauki.pl/articles/2121465.pdf
Publication date:
2022
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
COVID-19
Trait EI
emotion recognition
Description:
The COVID-19 pandemic is having a severe impact worldwide. One line of research has warned that facial occlusion may impair facial emotion recognition, while prior research has highlighted the role of Trait Emotional Intelligence in the recognition of non-verbal social stimuli. The sample consisted of 102 emerging adults aged 18-24 (M = 20.76; SD = 2.10; 84% female, 16% male), who were asked to recognize four emotions (happiness, fear, anger, and sadness) in fully visible faces and in faces wearing a mask, and to complete a questionnaire assessing Trait Emotional Intelligence. Participants were less accurate in detecting happiness and fear in covered faces: they gave more correct answers when the photographed faces were unmasked and more wrong answers when they were masked, and they recognized happiness and sadness more accurately in unmasked faces. Implications are discussed.
Source:
Polish Psychological Bulletin; 2022, 53, 1; 15-22
0079-2993
Appears in:
Polish Psychological Bulletin
Content provider:
Biblioteka Nauki
Article
Title:
Analysis of Features and Classifiers in Emotion Recognition Systems : Case Study of Slavic Languages
Authors:
Nedeljković, Željko
Milošević, Milana
Ðurović, Željko
Links:
https://bibliotekanauki.pl/articles/176678.pdf
Publication date:
2020
Publisher:
Polska Akademia Nauk. Czasopisma i Monografie PAN
Subjects:
emotion recognition
speech processing
classification algorithms
Description:
Today’s human-computer interaction systems have a broad variety of applications in which automatic human emotion recognition is of great interest. The literature contains many different, more or less successful forms of these systems. This work emerged as an attempt to clarify which speech features are the most informative, which classification structure is the most convenient for this type of task, and the degree to which the results are influenced by database size, quality and the cultural characteristics of a language. The research is presented as a case study on Slavic languages.
Source:
Archives of Acoustics; 2020, 45, 1; 129-140
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
The Acoustic Cues of Fear : Investigation of Acoustic Parameters of Speech Containing Fear
Authors:
Özseven, T.
Links:
https://bibliotekanauki.pl/articles/178133.pdf
Publication date:
2018
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
emotion recognition
acoustic analysis
fear
speech processing
Description:
Speech emotion recognition is an important part of human-machine interaction studies. The acoustic analysis method is used for emotion recognition through speech. An emotion does not cause changes in all acoustic parameters; rather, the parameters affected vary depending on the emotion type. In this context, the emotion-based variability of acoustic parameters is still an active field of study. The purpose of this study is to investigate which acoustic parameters fear affects and the extent of its influence. For this purpose, various acoustic parameters were obtained from speech recordings containing fear and neutral emotions. The variation of these parameters across emotional states was analyzed using statistical methods, and the parameters affected by fear and the degree of influence were determined. According to the results, the majority of the acoustic parameters that fear affects vary with the data used. However, formant frequencies, mel-frequency cepstral coefficients, and jitter parameters were shown to characterize the fear emotion independently of the data used.
Source:
Archives of Acoustics; 2018, 43, 2; 245-251
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
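The jitter parameter highlighted in the record above measures cycle-to-cycle variability of the pitch period. A minimal sketch of local jitter (the function name and sample values are illustrative, not taken from the paper):

```python
def jitter_local(periods):
    """Local jitter: mean absolute difference between consecutive
    pitch periods, divided by the mean period."""
    if len(periods) < 2:
        raise ValueError("need at least two pitch periods")
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

# A perfectly steady voice has zero jitter; irregular periods raise it.
steady = [0.010] * 5                                  # 10 ms periods -> 100 Hz
shaky = [0.010, 0.011, 0.009, 0.012, 0.008]           # fear tends to raise jitter
```

Here `jitter_local(steady)` is 0 while `jitter_local(shaky)` is positive, which is the kind of contrast the statistical analysis in the paper exploits.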
Title:
Communication atmosphere in humans and robots interaction based on the concept of fuzzy atmosfield generated by emotional states of humans and robots
Authors:
Liu, Z. T.
Chen, L. F.
Dong, F. Y.
Hirota, K.
Min, W.
Li, D. Y.
Yamazaki, Y.
Links:
https://bibliotekanauki.pl/articles/384920.pdf
Publication date:
2013
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Automatyki i Pomiarów
Subjects:
human-robot interaction
communication atmosphere
fuzzy logic
emotion recognition
Description:
Communication atmosphere based on the emotional states of humans and robots is modeled using the Fuzzy Atmosfield (FA), where human emotion is estimated from bimodal communication cues (i.e., speech and gesture) using weighted fusion and fuzzy logic, and robot emotion is generated by emotional expression synthesis. This makes it possible to quantitatively express the overall affective expression of individuals and helps to facilitate smooth communication in human-robot interaction. Experiments in a household environment were performed with four humans and five eye robots, where emotion recognition of humans based on bimodal cues achieved an average accuracy of 84%, an improvement of about 10% over using speech alone. Experimental results from the model of communication atmosphere based on the FA were evaluated against questionnaire surveys, where a maximum error of 0.25 and a minimum correlation coefficient of 0.72 for the three axes of the FA confirm the validity of the proposal. In ongoing work, an atmosphere representation system is being planned for casual communication between humans and robots, taking into account multiple emotional modalities such as speech, gesture, and music.
Source:
Journal of Automation Mobile Robotics and Intelligent Systems; 2013, 7, 2; 52-63
1897-8649
2080-2145
Appears in:
Journal of Automation Mobile Robotics and Intelligent Systems
Content provider:
Biblioteka Nauki
Article
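The weighted fusion of bimodal cues described above can be sketched roughly as follows (the weights, emotion labels and scores are invented for illustration; the paper's fuzzy-logic stage is omitted):

```python
def fuse_bimodal(speech_scores, gesture_scores, w_speech=0.6):
    """Decision-level fusion: weighted average of per-emotion scores
    from the speech and gesture channels (weights are assumptions)."""
    w_gesture = 1.0 - w_speech
    return {e: w_speech * speech_scores[e] + w_gesture * gesture_scores[e]
            for e in speech_scores}

# Hypothetical per-channel recognition scores for one utterance:
speech = {"happy": 0.7, "angry": 0.2, "sad": 0.1}
gesture = {"happy": 0.4, "angry": 0.5, "sad": 0.1}
fused = fuse_bimodal(speech, gesture)
top = max(fused, key=fused.get)
```

With these numbers the fused scores favour "happy", even though the gesture channel alone would have picked "angry" — the point of fusing both cues.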
Title:
Emotion monitoring – verification of physiological characteristics measurement procedures
Authors:
Landowska, A.
Links:
https://bibliotekanauki.pl/articles/220577.pdf
Publication date:
2014
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
affective computing
emotion recognition
physiology
motion artifacts
sensor location
Description:
This paper concerns measurement procedures on an emotion monitoring stand designed for tracking human emotions in Human-Computer Interaction using physiological characteristics. The paper addresses the key problem of physiological measurements being disturbed by motion typical for human-computer interaction, such as keyboard typing or mouse movements. An original experiment is described that aimed at a practical evaluation of the measurement procedures performed at the emotion monitoring stand constructed at GUT. Different sensor locations were considered and evaluated for suitability and measurement precision in Human-Computer Interaction monitoring. Alternative locations (ear lobes and forearms) for skin conductance, blood volume pulse and temperature sensors were proposed and verified. The alternative locations correlated with the traditional ones and were less sensitive to movements such as typing or mouse moving; they can therefore be a better solution for monitoring Human-Computer Interaction.
Source:
Metrology and Measurement Systems; 2014, 21, 4; 719-732
0860-8229
Appears in:
Metrology and Measurement Systems
Content provider:
Biblioteka Nauki
Article
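Verifying an alternative sensor site, as above, comes down to correlating its readings with those from the traditional site. A minimal Pearson-correlation check (the series values are invented, not measured data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equally long signal series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

wrist = [0.8, 1.1, 1.4, 1.2, 1.6]      # e.g. skin conductance, standard site
earlobe = [0.7, 1.0, 1.5, 1.1, 1.7]    # same signal, alternative site
```

A value of `pearson_r(wrist, earlobe)` close to 1 would support the alternative placement.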
Title:
Acoustic Methods in Identifying Symptoms of Emotional States
Authors:
Piątek, Zuzanna
Kłaczyński, Maciej
Links:
https://bibliotekanauki.pl/articles/1953482.pdf
Publication date:
2021
Publisher:
Polska Akademia Nauk. Czasopisma i Monografie PAN
Subjects:
emotion recognition
speech signal processing
clustering analysis
Sammon mapping
Description:
The study investigates the use of the speech signal to recognise speakers’ emotional states. The introduction includes the definition and categorization of emotions, covering facial expressions, speech and physiological signals. For the purpose of this work, a proprietary resource of emotionally-marked speech recordings was created. The collected recordings come from the media, including live journalistic broadcasts, which show spontaneous emotional reactions to real-time stimuli. For speech signal analysis, a dedicated script was written in Python. Its algorithm includes the parameterization of speech recordings and the determination of features correlated with the emotional content of speech. After the parametrization process, data clustering was performed to group the speakers' feature vectors into larger collections corresponding to specific emotional states. Using Student's t-test for dependent samples, descriptors were identified that show significant differences in feature values between emotional states. Potential applications of this research are proposed, as well as directions for future studies of the topic.
Source:
Archives of Acoustics; 2021, 46, 2; 259-269
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
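Student's t-test for dependent samples, as used above, compares feature values for the same speakers across two emotional states. A self-contained sketch of the t statistic (the feature values are hypothetical):

```python
import math

def paired_t(a, b):
    """Student's t statistic for dependent samples:
    t = mean(d) / (sd(d) / sqrt(n)), where d are pairwise differences
    (same speaker, two emotional states)."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical per-speaker values of one descriptor (e.g. mean F0 in Hz):
neutral = [110.0, 120.0, 115.0, 130.0, 125.0]
aroused = [118.0, 131.0, 121.0, 139.0, 135.0]
t = paired_t(aroused, neutral)
```

With df = 4, a |t| above the critical value 2.776 (α = 0.05, two-tailed) would mark this descriptor as significantly different between states.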
Title:
Speech Emotion Recognition Based on Voice Fundamental Frequency
Authors:
Dimitrova-Grekow, Teodora
Klis, Aneta
Igras-Cybulska, Magdalena
Links:
https://bibliotekanauki.pl/articles/177227.pdf
Publication date:
2019
Publisher:
Polska Akademia Nauk. Czasopisma i Monografie PAN
Subjects:
emotion recognition
speech signal analysis
voice analysis
fundamental frequency
speech corpora
Description:
The human voice is one of the basic means of communication, and it also easily conveys the speaker's emotional state. This paper presents experiments on emotion recognition in human speech based on the fundamental frequency. The AGH Emotional Speech Corpus was used. This database consists of audio samples of seven emotions acted by 12 different speakers (6 female and 6 male). We explored phrases of all the emotions, both all together and in various combinations. The fast Fourier transform and magnitude spectrum analysis were applied to extract the fundamental tone from the speech audio samples. After extracting several statistical features of the fundamental frequency, we studied whether they carry information on the emotional state of the speaker, applying different AI methods. Analysis of the outcome data was conducted with the following classifiers: k-Nearest Neighbours with local induction, Random Forest, Bagging, JRip, and the Random Subspace Method from the WEKA data-mining toolkit. The results show that the fundamental frequency is a promising choice for further experiments.
Source:
Archives of Acoustics; 2019, 44, 2; 277-286
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
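The pipeline above first extracts the fundamental tone and then derives statistics from it. As a simple, self-contained stand-in for the paper's FFT-based extraction, the sketch below estimates F0 by autocorrelation peak picking (parameters and the synthetic signal are illustrative):

```python
import math

def estimate_f0(samples, sr, fmin=60.0, fmax=400.0):
    """Estimate the fundamental frequency by autocorrelation peak picking.
    (A stand-in: the paper extracts F0 from the FFT magnitude spectrum.)"""
    lo = int(sr / fmax)                 # shortest candidate period, in samples
    hi = int(sr / fmin)                 # longest candidate period
    best_lag, best_val = lo, float("-inf")
    for lag in range(lo, min(hi, len(samples) - 1) + 1):
        val = sum(samples[i] * samples[i + lag]
                  for i in range(len(samples) - lag))
        if val > best_val:
            best_val, best_lag = val, lag
    return sr / best_lag

sr = 8000
tone = [math.sin(2 * math.pi * 100 * n / sr) for n in range(800)]  # pure 100 Hz
```

For a pure 100 Hz tone the strongest autocorrelation peak sits at one period (80 samples at 8 kHz), so the estimate lands on 100 Hz; per-phrase statistics (mean, range, variance of F0) then form the feature vector.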
Title:
Zastosowanie multimodalnej klasyfikacji w rozpoznawaniu stanów emocjonalnych na podstawie mowy spontanicznej
Spontaneous emotion recognition from speech signal using multimodal classification
Authors:
Kamińska, D.
Pelikant, A.
Links:
https://bibliotekanauki.pl/articles/408014.pdf
Publication date:
2012
Publisher:
Politechnika Lubelska. Wydawnictwo Politechniki Lubelskiej
Subjects:
emotion recognition
speech signal
k-NN algorithm
Description:
The article presents the issue of emotion recognition from a speech signal. For this study, a Polish database of spontaneous speech, containing utterances from several dozen people of different ages and genders, was created. A feature space was built from analysis of the speech signal. Recognition is performed by a multimodal classification mechanism based on the kNN algorithm. The average recognition accuracy is 83%.
Source:
Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska; 2012, 3; 36-39
2083-0157
2391-6761
Appears in:
Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska
Content provider:
Biblioteka Nauki
Article
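The kNN classification used above can be sketched in a few lines (the 2-D feature space, labels and distance choice are illustrative, not the paper's actual features):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """k-nearest-neighbours vote: train is a list of (feature_vector, label)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: sqdist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy 2-D feature space (say, normalized mean F0 and energy):
train = [((1.0, 1.0), "neutral"), ((1.2, 0.9), "neutral"),
         ((3.0, 3.2), "anger"), ((2.9, 3.1), "anger"),
         ((1.1, 1.1), "neutral")]
```

A query near the neutral cluster, e.g. `knn_predict(train, (1.0, 1.2))`, is outvoted to "neutral"; one near the anger cluster goes to "anger".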
Title:
Pomiary parametrów akustycznych mowy emocjonalnej - krok ku modelowaniu wokalnej ekspresji emocji
Measurements of emotional speech acoustic parameters - a step towards vocal emotion expression modelling
Authors:
Igras, M.
Wszołek, W.
Links:
https://bibliotekanauki.pl/articles/154905.pdf
Publication date:
2012
Publisher:
Stowarzyszenie Inżynierów i Techników Mechaników Polskich
Subjects:
emotion recognition
vocal correlates of emotions
speech signal processing
Description:
The paper presents an approach to creating new measures of the emotional content of speech signals. The results of this project constitute the basis for further research in this field. For the analysis of differences between the basic emotional states, independently of speaker and semantic content, a corpus of acted emotional speech was designed and recorded. Alternative methods of emotional speech signal acquisition are presented and discussed (Section 2). Preliminary tests were performed to evaluate the corpus's applicability to automatic emotion recognition. At the recording-labelling stage, human perceptual tests were applied (using recordings with and without semantic content); the results are presented as confusion tables (Tabs. 1 and 2). Subsequent signal processing, parametrisation and feature extraction (Section 3), allowed a set of features characteristic of each emotion to be extracted, leading to preliminary vocal emotion profiles (sets of acoustic features characteristic of each basic emotion); an example is presented in Tab. 3. Using selected feature vectors, methods for automatic classification (k nearest neighbours and a self-organizing neural network) were tested. Section 4 contains the conclusions: an analysis of the variables associated with vocal expression of emotions and the challenges in further development. The paper also discusses the use of this kind of research for medical applications (Section 5).
Source:
Pomiary Automatyka Kontrola; 2012, R. 58, nr 4, 4; 335-338
0032-4140
Appears in:
Pomiary Automatyka Kontrola
Content provider:
Biblioteka Nauki
Article
Title:
Rozumienie mowy jako moderator związku rozpoznawania emocji z nasileniem symptomów zaburzeń ze spektrum autyzmu (ASD)
Speech comprehension as a moderator of the relationship between emotion recognition and the severity of autism spectrum disorder (ASD) symptoms
Authors:
Krzysztofik, Karolina
Links:
https://bibliotekanauki.pl/articles/2054372.pdf
Publication date:
2021
Publisher:
Uniwersytet Marii Curie-Skłodowskiej. Wydawnictwo Uniwersytetu Marii Curie-Skłodowskiej
Subjects:
autism spectrum disorder
ASD
emotion recognition
speech comprehension
Description:
Contemporary researchers underline the consequences that the difficulties persons with autism spectrum disorder (ASD) experience in emotion recognition have for the severity of symptoms of this disorder. At the same time, many individuals with ASD can recognize the emotional states of others using compensatory strategies based on relatively well-developed cognitive and linguistic competences. Thus, the relationship between emotion recognition and the severity of ASD symptoms may be moderated by linguistic competences. The present study aimed to determine whether the level of speech comprehension moderates the relationship between emotion recognition and ASD symptom severity. Participants were 63 children with ASD aged from 3 years and 7 months to 9 years and 3 months. The following tools were used: the ASD Symptom Severity Scale, the Emotion Recognition subscale of the Theory of Mind Scale, and the Speech Comprehension subscale from the Intelligence and Development Scales – Preschool (IDS-P). The results indicate that the level of speech comprehension moderates the relationship between the level of emotion recognition and ASD symptom severity with respect to deficits in communication and interaction. These results have implications for integrating speech comprehension therapy into the rehabilitation of individuals with ASD, as well as for theoretical reflection on the determinants of ASD symptom severity.
Source:
Annales Universitatis Mariae Curie-Skłodowska, sectio J – Paedagogia-Psychologia; 2021, 34, 3; 199-219
0867-2040
Appears in:
Annales Universitatis Mariae Curie-Skłodowska, sectio J – Paedagogia-Psychologia
Content provider:
Biblioteka Nauki
Article
Title:
Using BCI and EEG to process and analyze driver’s brain activity signals during VR simulation
Authors:
Nader, Mirosław
Jacyna-Gołda, Ilona
Nader, Stanisław
Nehring, Karol
Links:
https://bibliotekanauki.pl/articles/2067410.pdf
Publication date:
2021
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
signal processing
EEG
BCI
emotion recognition
driver
virtual reality
virtual simulation
Description:
The use of popular brain-computer interfaces (BCI) to analyze signals and the behavior of brain activity is a very current problem that is often undertaken in various aspects by many researchers. This comparison turns out to be particularly useful when studying the flows of information and signals in the human-machine-environment system, especially in the field of transportation sciences. This article presents the results of a pilot study of driver behavior with the use of a proprietary simulator based on Virtual Reality technology. The study records the signals emitted by specific zones of the brain in response to given environmental factors. A solution based on virtual reality, limiting the external stimuli emitted by the real world, was proposed, and computational analysis of the obtained data was performed. The research focused on traffic situations and how they affect the subject. The test was attended by representatives of various age groups, both with and without a driving license. This study presents an original functional model of a research stand in VR technology that we designed and built. Testing in VR conditions limits the influence of undesirable external stimuli that may distort the readings, while increasing the range of road events that can be simulated without generating any risk for the participant. In the presented studies, the BCI was used to assess the driver's behavior by registering the activity of selected brain waves of the examined person. An electroencephalogram (EEG) was used to study brain activity and its response to stimuli coming from the Virtual Reality environment. Detection of electrical activity is possible thanks to electrodes placed on the skin in selected areas of the skull.
The structure of the proprietary test stand for signal and information flow simulation tests, which allows for the selection of measured signals and the method of parameter recording, is presented. An important part of this study is the presentation of the results of pilot studies obtained in the course of real research on the behavior of a car driver.
Source:
Archives of Transport; 2021, 60, 4; 137-153
0866-9546
2300-8830
Appears in:
Archives of Transport
Content provider:
Biblioteka Nauki
Article
Title:
Multi-model hybrid ensemble weighted adaptive approach with decision level fusion for personalized affect recognition based on visual cues
Authors:
Jadhav, Nagesh
Sugandhi, Rekha
Links:
https://bibliotekanauki.pl/articles/2086876.pdf
Publication date:
2021
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
deep learning
convolutional neural network
emotion recognition
transfer learning
late fusion
Description:
In the domain of affective computing, different emotional expressions play an important role. Facial expressions or visual cues are the primary means of conveying the emotional state of humans, and they do so more convincingly than any other cue. With advances in deep learning techniques, a convolutional neural network (CNN) can be used to automatically extract features from visual cues; however, variable-sized and biased datasets are a vital challenge for the implementation of deep models, and the dataset used for training plays a significant role in the retrieved results. In this paper, we propose a multi-model hybrid ensemble weighted adaptive approach with decision-level fusion for personalized affect recognition based on visual cues. We use a CNN and a pre-trained ResNet-50 model for transfer learning; the VGGFace model's weights are used to initialize ResNet-50 for fine-tuning. The proposed system shows a significant improvement in test accuracy for affective state recognition compared to a single CNN developed from scratch or a transfer-learned model. The proposed methodology is validated on the Karolinska Directed Emotional Faces (KDEF) dataset with 77.85% accuracy. The obtained results are promising compared to existing state-of-the-art methods.
Source:
Bulletin of the Polish Academy of Sciences. Technical Sciences; 2021, 69, 6; e138819, 1-11
0239-7528
Appears in:
Bulletin of the Polish Academy of Sciences. Technical Sciences
Content provider:
Biblioteka Nauki
Article
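Decision-level (late) fusion, as named in the title above, combines the class-probability outputs of several models rather than their features. A minimal sketch, assuming the weights come from per-model validation accuracy (all numbers are invented):

```python
def fuse_decisions(prob_sets, weights):
    """Decision-level fusion: weighted sum of each model's class-probability
    vector, normalized by the total weight."""
    total = sum(weights)
    n_classes = len(prob_sets[0])
    return [sum(w * probs[i] for probs, w in zip(prob_sets, weights)) / total
            for i in range(n_classes)]

# Two hypothetical models scoring three classes (happy, sad, angry):
cnn_probs = [0.6, 0.3, 0.1]
resnet_probs = [0.4, 0.5, 0.1]
fused = fuse_decisions([cnn_probs, resnet_probs], weights=[0.7, 0.8])
winner = max(range(len(fused)), key=fused.__getitem__)
```

The fused vector remains a valid probability distribution, and the final label is its argmax; the "adaptive" element in the paper lies in how the weights are chosen, which this sketch only stubs out.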
Title:
Rozpoznawanie emocji w tekstach polskojęzycznych z wykorzystaniem metody słów kluczowych
Emotion recognition in Polish texts based on a keyword detection method
Authors:
Nowaczyk, A.
Jackowska-Strumiłło, L.
Links:
https://bibliotekanauki.pl/articles/408760.pdf
Publication date:
2017
Publisher:
Politechnika Lubelska. Wydawnictwo Politechniki Lubelskiej
Subjects:
emotion recognition
human-computer interaction
natural language processing
text processing
Description:
The dynamic development of social networks has made the Internet the most popular communication medium. The vast majority of messages are exchanged in text format and very often reflect the authors' emotional states. Detection of emotions in text is widely used in e-commerce and telemedicine, becoming a milestone in the field of human-computer interaction. The paper presents a method of emotion recognition in Polish-language texts based on a keyword detection algorithm with lemmatization. The obtained accuracy is about 60%. The first Polish-language database of keywords expressing emotions has also been developed.
Source:
Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska; 2017, 7, 2; 102-105
2083-0157
2391-6761
Appears in:
Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska
Content provider:
Biblioteka Nauki
Article
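The keyword-detection-with-lemmatization method described above can be sketched as follows (the lexicon and lemma table are invented English stand-ins; the paper's resource is a Polish keyword database, and Polish requires a real morphological dictionary):

```python
# Hypothetical emotion-keyword lexicon: lemma -> emotion label.
LEXICON = {"love": "joy", "hate": "anger", "fear": "fear", "cry": "sadness"}

# Toy lemmatizer: maps inflected forms back to their lemmas.
LEMMAS = {"loved": "love", "loves": "love", "hates": "hate", "cried": "cry"}

def detect_emotions(text):
    """Return the emotions whose keywords (after lemmatization) occur in text."""
    found = []
    for token in text.lower().split():
        lemma = LEMMAS.get(token, token)   # fall back to the surface form
        if lemma in LEXICON:
            found.append(LEXICON[lemma])
    return found
```

Lemmatization is what lets "loved" and "loves" hit the single lexicon entry "love", which matters far more in a highly inflected language like Polish.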
Title:
Rozpoznawanie i pomiar emocji w badaniach doświadczeń klienta
Recognition and Measurement of Emotions in Customer Experience Research
Authors:
Budzanowska-Drzewiecka, Małgorzata
Lubowiecki-Vikuk, Adrian
Links:
https://bibliotekanauki.pl/articles/27839578.pdf
Publication date:
2023
Publisher:
Wydawnictwo Uniwersytetu Ekonomicznego we Wrocławiu
Subjects:
customer experience
emotion recognition
automatic facial expression analysis
FaceReader
measuring emotions
Description:
The study of customer experience requires the development of methodologies which measure such experience and account for its complexity. One important component of customer experience is emotion, whose recognition and measurement is still a challenge for researchers. The purpose of this article is to discuss the methods and techniques used to recognise and measure emotions in customer experience research. Particular attention is paid to techniques derived from consumer neuroscience, including the dilemmas associated with reaching for automatic analysis of facial expressions. The literature review points to an ongoing discussion on the benefits and limitations of using automatic facial expression analysis in measuring customer experience. Despite its limitations, the technique can be an attractive complement to methods and techniques that capture the emotional components of customer experience at different stages (before, during, and after purchase).
Source:
Prace Naukowe Uniwersytetu Ekonomicznego we Wrocławiu; 2023, 67, 5; 67-77
1899-3192
Appears in:
Prace Naukowe Uniwersytetu Ekonomicznego we Wrocławiu
Content provider:
Biblioteka Nauki
Article
Title:
Comparison of speaker dependent and speaker independent emotion recognition
Authors:
Rybka, J.
Janicki, A.
Links:
https://bibliotekanauki.pl/articles/330055.pdf
Publication date:
2013
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Subjects:
speech processing
emotion recognition
EMO-DB
support vector machines
artificial neural network
Description:
This paper describes a study of emotion recognition based on speech analysis. The introduction to the theory contains a review of the emotion inventories used in various studies of emotion recognition, as well as the speech corpora applied, methods of speech parametrization, and the most commonly employed classification algorithms. In the current study the EMO-DB speech corpus and three selected classifiers, k-Nearest Neighbor (k-NN), an Artificial Neural Network (ANN) and Support Vector Machines (SVMs), were used in experiments. SVMs provided the best classification accuracy, 75.44%, in the speaker-dependent mode, that is, when speech samples from the same speaker were included in the training corpus. Various speaker-dependent and speaker-independent configurations were analyzed and compared. Emotion recognition in speaker-dependent conditions usually yielded higher accuracy than a similar speaker-independent configuration; the improvement was especially noticeable when the base recognition ratio of a given speaker was low. Happiness and anger, as well as boredom and neutrality, proved to be the pairs of emotions most often confused.
Source:
International Journal of Applied Mathematics and Computer Science; 2013, 23, 4; 797-808
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
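The speaker-independent configuration above requires that the test speaker's samples never appear in training. A minimal leave-one-speaker-out split (the data values are illustrative only):

```python
def speaker_independent_split(samples, test_speaker):
    """Leave-one-speaker-out: every sample of `test_speaker` goes to the
    test set, so the classifier never sees that voice during training."""
    train = [(feats, label) for spk, feats, label in samples
             if spk != test_speaker]
    test = [(feats, label) for spk, feats, label in samples
            if spk == test_speaker]
    return train, test

# (speaker, features, emotion) triples:
samples = [("A", [1.0], "anger"), ("A", [0.2], "neutral"),
           ("B", [1.1], "anger"), ("B", [0.3], "neutral")]
train, test = speaker_independent_split(samples, "B")
```

A speaker-dependent configuration, by contrast, would shuffle all samples together before splitting, letting the model learn each speaker's voice; that difference alone accounts for the accuracy gap the paper reports.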
