
Search results for the phrase "emotion recognition" by criterion: Topic


Title:
Speech emotion recognition based on sparse representation
Authors:
Yan, J.
Wang, X.
Gu, W.
Ma, L.
Links:
https://bibliotekanauki.pl/articles/177778.pdf
Publication date:
2013
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Keywords:
speech emotion recognition
sparse partial least squares regression (SPLSR)
feature selection and dimensionality reduction
Description:
Speech emotion recognition is deemed a meaningful yet intractable problem across a number of domains, including sentiment analysis, computer science, and pedagogy. In this study, we investigate in depth speech emotion recognition based on the sparse partial least squares regression (SPLSR) approach. We use SPLSR to perform feature selection and dimensionality reduction on the full set of acquired speech emotion features. By exploiting the SPLSR method, the components of redundant and meaningless speech emotion features are shrunk to zero, while the useful and informative features are retained and passed to the subsequent classification step. A number of tests on the Berlin database show that the recognition rate of the SPLSR method reaches 79.23% and is superior to the other dimensionality reduction methods compared.
Source:
Archives of Acoustics; 2013, 38, 4; 465-470
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
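The SPLSR selection step summarized in the abstract above can be illustrated with a small sketch. This is not the authors' implementation: for a single response variable, the first PLS weight vector reduces to the centered covariance X^T y, and sparsity is imposed here by simple soft-thresholding; the data, the informative-feature indices, and the threshold value are all made up.

```python
import numpy as np

def sparse_pls_direction(X, y, threshold=0.3):
    """First PLS weight vector for a univariate response, soft-thresholded
    so that weakly informative features are shrunk exactly to zero."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    w = Xc.T @ yc                      # covariance-based PLS direction
    w = w / np.max(np.abs(w))          # scale weights to [-1, 1]
    w_sparse = np.sign(w) * np.maximum(np.abs(w) - threshold, 0.0)
    selected = np.nonzero(w_sparse)[0] # indices of retained features
    return w_sparse, selected

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
# only features 0 and 3 actually drive the response
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=100)
w, kept = sparse_pls_direction(X, y)
```

The informative features survive thresholding while most noise features are zeroed, which mirrors the selection behavior the abstract describes.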
Title:
Music Playlist Generation using Facial Expression Analysis and Task Extraction
Authors:
Sen, A.
Popat, D.
Shah, H.
Kuwor, P.
Johri, E.
Links:
https://bibliotekanauki.pl/articles/908868.pdf
Publication date:
2016
Publisher:
Uniwersytet Marii Curie-Skłodowskiej. Wydawnictwo Uniwersytetu Marii Curie-Skłodowskiej
Keywords:
facial expression analysis
emotion recognition
feature extraction
Viola-Jones face detection
Gabor filter
AdaBoost
k-NN algorithm
task extraction
music classification
playlist generation
Description:
In the day-to-day stressful environment of the IT industry, working professionals rarely find appropriate time to relax. To keep a person stress free, various technical and non-technical stress-relieving methods are now being adopted. People working on computers can be categorized as administrators, programmers, etc., each of whom requires different ways to unwind. Work pressure and vexation of any kind are reflected in a person's emotions, and facial expressions are the key to analyzing the current psychology of the person. In this paper, we discuss a user-intuitive smart music player. This player captures the facial expressions of a person working on the computer and identifies the current emotion; music is then played to help the user relax. The music player also takes into account the foreground processes which the person is executing on the computer. Since various sorts of music are available to boost one's enthusiasm, an ideal playlist of songs is created and played for the person, taking into consideration the tasks executed on the system and the person's current emotions. The person can browse the playlist and modify it, making the system more flexible. This music player thus allows working professionals to stay relaxed in spite of their workloads.
Source:
Annales Universitatis Mariae Curie-Skłodowska. Sectio AI, Informatica; 2016, 16, 2; 1-6
1732-1360
2083-3628
Appears in:
Annales Universitatis Mariae Curie-Skłodowska. Sectio AI, Informatica
Content provider:
Biblioteka Nauki
Article
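The classification step named in the keywords above (k-NN over extracted facial-expression features, with the detected emotion mapped to a playlist category) can be sketched as follows. The toy feature vectors, labels, and playlist mapping are invented for illustration and are not from the paper.

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify feature vector x by majority vote among the k nearest
    training samples under Euclidean distance."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(d)[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# toy 2-D expression features with emotion labels
train_X = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
train_y = ["happy", "happy", "sad", "sad"]

emotion = knn_predict(train_X, train_y, np.array([0.85, 0.15]))
playlist = {"happy": "upbeat", "sad": "soothing"}[emotion]
```

The detected emotion then selects the playlist category, which is the core of the pipeline the abstract describes.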
Title:
Comparison of speaker dependent and speaker independent emotion recognition
Authors:
Rybka, J.
Janicki, A.
Links:
https://bibliotekanauki.pl/articles/330055.pdf
Publication date:
2013
Publisher:
Uniwersytet Zielonogórski. Oficyna Wydawnicza
Keywords:
speech processing
emotion recognition
EMO-DB
support vector machines
artificial neural network
przetwarzanie mowy
rozpoznawanie emocji
maszyna wektorów wspierających
sztuczna sieć neuronowa
Description:
This paper describes a study of emotion recognition based on speech analysis. The introduction to the theory contains a review of emotion inventories used in various studies of emotion recognition as well as the speech corpora applied, methods of speech parametrization, and the most commonly employed classification algorithms. In the current study the EMO-DB speech corpus and three selected classifiers, the k-Nearest Neighbor (k-NN), the Artificial Neural Network (ANN) and Support Vector Machines (SVMs), were used in experiments. SVMs turned out to provide the best classification accuracy, 75.44%, in the speaker dependent mode, that is, when speech samples from the same speaker were included in the training corpus. Various speaker dependent and speaker independent configurations were analyzed and compared. Emotion recognition in speaker dependent conditions usually yielded higher accuracy than a similar but speaker independent configuration. The improvement was especially well observed if the base recognition ratio of a given speaker was low. Happiness and anger, as well as boredom and neutrality, proved to be the pairs of emotions most often confused.
Source:
International Journal of Applied Mathematics and Computer Science; 2013, 23, 4; 797-808
1641-876X
2083-8492
Appears in:
International Journal of Applied Mathematics and Computer Science
Content provider:
Biblioteka Nauki
Article
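The speaker-dependent vs. speaker-independent contrast studied above can be imitated on synthetic data. This sketch uses scikit-learn's SVC as a stand-in classifier (an assumption about tooling, not the authors' setup); a small per-speaker feature offset stands in for inter-speaker variability, and all numbers are invented.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def make_samples(speaker_offset, n=40):
    """Toy 2-D features for two emotions, shifted by a per-speaker offset."""
    x_angry = rng.normal(loc=1.0 + speaker_offset, scale=0.3, size=(n, 2))
    x_bored = rng.normal(loc=-1.0 + speaker_offset, scale=0.3, size=(n, 2))
    X = np.vstack([x_angry, x_bored])
    y = np.array([1] * n + [0] * n)
    return X, y

X_s1, y_s1 = make_samples(0.0)   # speaker 1
X_s2, y_s2 = make_samples(0.2)   # speaker 2, slightly shifted features

clf = SVC(kernel="rbf").fit(X_s1, y_s1)
acc_dep = clf.score(X_s1, y_s1)    # speaker-dependent: same speaker in train and test
acc_indep = clf.score(X_s2, y_s2)  # speaker-independent: unseen speaker
```

With a larger offset (more dissimilar speakers), the speaker-independent score degrades while the speaker-dependent one does not, which is the effect the paper quantifies on EMO-DB.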
Title:
Music Mood Visualization Using Self-Organizing Maps
Authors:
Plewa, M.
Kostek, B.
Links:
https://bibliotekanauki.pl/articles/176410.pdf
Publication date:
2015
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Keywords:
music mood
music parameterization
MER (Music Emotion Recognition)
MIR (Music Information Retrieval)
Multidimensional Scaling (MDS)
principal component analysis (PCA)
Self-Organizing Maps (SOM)
ANN (Artificial Neural Networks)
Description:
Due to the increasing amount of music made available in digital form on the Internet, an automatic organization of music is sought. The paper presents an approach to the graphical representation of the mood of songs based on Self-Organizing Maps. Parameters describing the mood of music are proposed and calculated, and then analyzed by employing correlation with mood dimensions based on Multidimensional Scaling. A map is created in which music excerpts with similar moods are organized next to each other on a two-dimensional display.
Source:
Archives of Acoustics; 2015, 40, 4; 513-525
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
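The map-building idea above (similar moods land on nearby grid nodes) can be shown with a minimal Self-Organizing Map. This is a bare-bones SOM sketch, not the paper's system: the mood features are hypothetical 2-D (arousal, valence) pairs and all hyperparameters are made up.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=200, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal SOM: each grid node holds a weight vector; the best-matching
    node (BMU) and its grid neighbours are pulled toward each sample."""
    rng = np.random.default_rng(seed)
    h, w = grid
    nodes = rng.normal(size=(h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-3    # shrinking neighbourhood
        for x in data:
            d = np.linalg.norm(nodes - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=2) / (2 * sigma ** 2))
            nodes += lr * g[..., None] * (x - nodes)
    return nodes

def map_position(nodes, x):
    """Grid coordinates of the node closest to sample x."""
    d = np.linalg.norm(nodes - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

# hypothetical mood features: (arousal, valence) in [-1, 1]
moods = np.array([[0.9, 0.8], [0.85, 0.75], [-0.9, -0.8], [-0.85, -0.75]])
nodes = train_som(moods)
p_happy = map_position(nodes, moods[0])
p_sad = map_position(nodes, moods[2])
```

After training, opposite moods occupy different regions of the grid, which is the two-dimensional mood display the abstract describes.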
Title:
Acoustic Methods in Identifying Symptoms of Emotional States
Authors:
Piątek, Zuzanna
Kłaczyński, Maciej
Links:
https://bibliotekanauki.pl/articles/1953482.pdf
Publication date:
2021
Publisher:
Polska Akademia Nauk. Czasopisma i Monografie PAN
Keywords:
emotion recognition
speech signal processing
clustering analysis
Sammon mapping
Description:
The study investigates the use of the speech signal to recognise speakers' emotional states. The introduction includes the definition and categorization of emotions, including facial expressions, speech and physiological signals. For the purpose of this work, a proprietary resource of emotionally-marked speech recordings was created. The collected recordings come from the media, including live journalistic broadcasts, which show spontaneous emotional reactions to real-time stimuli. For the purpose of speech signal analysis, a dedicated script was written in Python. Its algorithm includes the parameterization of speech recordings and the determination of features correlated with emotional content in speech. After the parametrization process, data clustering was performed to allow the grouping of feature vectors for speakers into larger collections that represent specific emotional states. Using Student's t-test for dependent samples, descriptors were distinguished that identified significant differences in feature values between emotional states. Some potential applications for this research were proposed, as well as other development directions for future studies of the topic.
Source:
Archives of Acoustics; 2021, 46, 2; 259-269
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
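The dependent-samples t-test mentioned above compares a feature's values across emotional states for the same speakers. A minimal sketch, with made-up per-speaker mean F0 values (the feature choice and numbers are assumptions, not the paper's data):

```python
import math

def paired_t(a, b):
    """Student's t statistic for dependent (paired) samples:
    t = mean(d) / (sd(d) / sqrt(n)), where d = a - b."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # unbiased variance
    return mean / math.sqrt(var / n)

# hypothetical mean F0 (Hz) per speaker: neutral vs. emotional recordings
f0_neutral = [118, 132, 125, 140, 121, 130, 127, 135]
f0_emotional = [141, 150, 139, 161, 134, 148, 140, 152]
t = paired_t(f0_emotional, f0_neutral)
```

A |t| above the critical value for n-1 degrees of freedom (about 2.36 here at the 5% level) marks the descriptor as significantly different between states.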
Title:
The Acoustic Cues of Fear: Investigation of Acoustic Parameters of Speech Containing Fear
Authors:
Özseven, T.
Links:
https://bibliotekanauki.pl/articles/178133.pdf
Publication date:
2018
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Keywords:
emotion recognition
acoustic analysis
fear
speech processing
Description:
Speech emotion recognition is an important part of human-machine interaction studies. The acoustic analysis method is used for emotion recognition through speech. An emotion does not cause changes in all acoustic parameters; rather, the acoustic parameters affected by an emotion vary depending on the emotion type. In this context, the emotion-based variability of acoustic parameters is still a current field of study. The purpose of this study is to investigate which acoustic parameters fear affects and the extent of their influence. For this purpose, various acoustic parameters were obtained from speech recordings containing fear and neutral emotions. The change of these parameters according to emotional state was analyzed using statistical methods, and the parameters affected by the fear emotion, and the degree of influence, were determined. According to the results obtained, the majority of acoustic parameters that fear affects vary according to the data used. However, it has been demonstrated that formant frequencies, mel-frequency cepstral coefficients, and jitter parameters can define the fear emotion independently of the data used.
Source:
Archives of Acoustics; 2018, 43, 2; 245-251
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
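One of the parameters singled out above, jitter, measures cycle-to-cycle instability of the pitch period. A minimal sketch of local jitter on made-up pitch-period sequences (the values are invented; real analysis would extract periods from the waveform):

```python
def local_jitter(periods):
    """Local jitter: mean absolute difference between consecutive pitch
    periods, relative to the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods[1:], periods[:-1])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

# hypothetical pitch periods in ms; fearful speech tends to be less periodic
neutral = [8.0, 8.1, 7.9, 8.0, 8.1, 8.0]
fearful = [7.2, 7.9, 6.8, 7.6, 6.9, 7.7]
j_neutral = local_jitter(neutral)
j_fearful = local_jitter(fearful)
```

The larger jitter of the irregular sequence illustrates why this parameter can separate fearful from neutral speech.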
Title:
Facial emotion recognition using average face ratios and fuzzy hamming distance
Authors:
Ounachad, Khalid
Oualla, Mohamed
Sadiq, Abdelalim
Souhar, Abdelghani
Links:
https://bibliotekanauki.pl/articles/2141894.pdf
Publication date:
2020
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Automatyki i Pomiarów
Keywords:
average face ratios
facial emotion recognition
fuzzy Hamming distance
perfect face ratios
Description:
Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Nowadays, emotional factors are as important as classic functional aspects of customer purchasing behavior: purchasing choices and decision making are the result of a careful analysis of product advantages and disadvantages and of affective and emotional aspects. This paper presents a novel method for human emotion classification and recognition. We generate seven referential faces, one suitable for each kind of facial emotion, based on perfect face ratios and some classical averages. The basic idea is to extract perfect face ratios for the emotional face and for each referential face as features and to calculate the distance between them using the fuzzy Hamming distance. To extract the perfect face ratios, we use the landmark points of the face, from which sixteen features are extracted. An experimental evaluation demonstrates the satisfactory performance of our approach on the WSEFEP dataset, and the method can be applied to any existing facial emotion dataset. The proposed algorithm is competitive with related approaches; the recognition rate reaches more than 90%.
Source:
Journal of Automation Mobile Robotics and Intelligent Systems; 2020, 14, 4; 37-44
1897-8649
2080-2145
Appears in:
Journal of Automation Mobile Robotics and Intelligent Systems
Content provider:
Biblioteka Nauki
Article
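The matching idea above (compare a probe face's ratio vector to each referential emotion face and pick the closest) can be sketched with a simplified fuzzy distance. Note the paper's fuzzy Hamming distance is more elaborate; here each coordinate contributes a degree-of-difference in [0, 1] instead of a hard 0/1 mismatch, and all ratio vectors are hypothetical.

```python
import math

def fuzzy_hamming(a, b, alpha=10.0):
    """Simplified fuzzy Hamming distance: sum of per-coordinate
    degrees of difference, each in [0, 1]."""
    return sum(1.0 - math.exp(-alpha * abs(x - y)) for x, y in zip(a, b))

# hypothetical face-ratio vectors for two referential emotion faces
referential = {
    "happy":    [0.62, 0.38, 0.50],
    "surprise": [0.55, 0.45, 0.61],
}
probe = [0.61, 0.39, 0.52]  # ratios extracted from an input face
best = min(referential, key=lambda e: fuzzy_hamming(referential[e], probe))
```

The probe is assigned the emotion of the nearest referential face, which is the classification rule the abstract outlines.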
Title:
Rozpoznawanie emocji w tekstach polskojęzycznych z wykorzystaniem metody słów kluczowych
Emotion recognition in Polish texts based on keywords detection method
Authors:
Nowaczyk, A.
Jackowska-Strumiłło, L.
Links:
https://bibliotekanauki.pl/articles/408760.pdf
Publication date:
2017
Publisher:
Politechnika Lubelska. Wydawnictwo Politechniki Lubelskiej
Keywords:
rozpoznawanie emocji
interakcja człowiek-komputer
przetwarzanie języka naturalnego
przetwarzanie tekstów
emotion recognition
human-computer interaction
natural language processing
text processing
Description:
The dynamic development of social networks has made the Internet the most popular communication medium. The vast majority of messages are exchanged in text format and very often reflect the author's emotional state. Detection of emotions in text is widely used in e-commerce and telemedicine, and is becoming an important element of human-computer communication. The paper presents a method of emotion recognition in Polish-language texts based on a keyword detection algorithm with lemmatization. The obtained accuracy is about 60%. The first Polish-language database of keywords expressing emotions has also been developed.
Source:
Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska; 2017, 7, 2; 102-105
2083-0157
2391-6761
Appears in:
Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska
Content provider:
Biblioteka Nauki
Article
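The keyword-plus-lemmatization method above can be sketched minimally. A real system would use a Polish lemmatizer and the authors' keyword database; here a tiny hand-made lemma table and a three-entry lexicon stand in for both.

```python
# hypothetical emotion-keyword lexicon: lemma -> emotion label
lexicon = {"radość": "joy", "strach": "fear", "złość": "anger"}
# toy lemmatization table standing in for a real Polish lemmatizer
lemmas = {"radości": "radość", "strachu": "strach", "złością": "złość"}

def detect_emotions(text):
    """Return emotion labels for every lemmatized token found in the lexicon."""
    found = []
    for token in text.lower().split():
        token = token.strip(".,!?")
        lemma = lemmas.get(token, token)  # fall back to the surface form
        if lemma in lexicon:
            found.append(lexicon[lemma])
    return found

emotions = detect_emotions("Nie kryła radości, ale w jej głosie słychać było strach.")
```

Each inflected keyword is first reduced to its lemma, then looked up, which is why lemmatization matters for a highly inflected language like Polish.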
Title:
Analysis of Features and Classifiers in Emotion Recognition Systems: Case Study of Slavic Languages
Authors:
Nedeljković, Željko
Milošević, Milana
Ðurović, Željko
Links:
https://bibliotekanauki.pl/articles/176678.pdf
Publication date:
2020
Publisher:
Polska Akademia Nauk. Czasopisma i Monografie PAN
Keywords:
emotion recognition
speech processing
classification algorithms
Description:
Today's human-computer interaction systems have a broad variety of applications in which automatic human emotion recognition is of great interest. The literature contains many different, more or less successful, forms of these systems. This work emerged as an attempt to clarify which speech features are the most informative, which classification structure is the most suitable for this type of task, and the degree to which the results are influenced by database size and quality and the cultural characteristics of a language. The research is presented as a case study on Slavic languages.
Source:
Archives of Acoustics; 2020, 45, 1; 129-140
0137-5075
Appears in:
Archives of Acoustics
Content provider:
Biblioteka Nauki
Article
Title:
Using BCI and EEG to process and analyze driver’s brain activity signals during VR simulation
Authors:
Nader, Mirosław
Jacyna-Gołda, Ilona
Nader, Stanisław
Nehring, Karol
Links:
https://bibliotekanauki.pl/articles/2067410.pdf
Publication date:
2021
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Keywords:
signal processing
EEG
BCI
emotion recognition
driver
virtual reality
przetwarzanie sygnałów
rozpoznawanie emocji
kierowca
symulacja wirtualna
Description:
The use of popular brain-computer interfaces (BCI) to analyze signals and the behavior of brain activity is a very current problem that is often undertaken in various aspects by many researchers. Such analysis turns out to be particularly useful when studying the flows of information and signals in the human-machine-environment system, especially in the field of transportation sciences. This article presents the results of a pilot study of driver behavior with the use of a proprietary simulator based on Virtual Reality technology. The study uses the technology of recording signals emitted by the human mind and its specific zones in response to given environmental factors. A solution based on virtual reality with the limitation of external stimuli emitted by the real world was proposed, and computational analysis of the obtained data was performed. The research focused on traffic situations and how they affect the subject. The test was attended by representatives of various age groups, both with and without a driving license. This study presents an original functional model of a research stand in VR technology that we designed and built. Testing in VR conditions makes it possible to limit the influence of undesirable external stimuli that may distort the results of readings, while increasing the range of road events that can be simulated without generating any risk for the participant. In the presented studies, the BCI was used to assess the driver's behavior, which allows the activity of selected brain waves of the examined person to be registered. An electroencephalogram (EEG) was used to study the activity of the brain and its response to stimuli coming from the Virtual Reality environment. Electrical activity detection is possible thanks to the use of electrodes placed on the skin in selected areas of the skull.
The structure of the proprietary test stand for signal and information flow simulation tests, which allows for the selection of measured signals and the method of parameter recording, is presented. An important part of this study is the presentation of the results of pilot studies obtained in the course of real research on the behavior of a car driver.
Source:
Archives of Transport; 2021, 60, 4; 137-153
0866-9546
2300-8830
Appears in:
Archives of Transport
Content provider:
Biblioteka Nauki
Article
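A standard first step in analyzing EEG recordings like those above is estimating power in the classical frequency bands. The sketch below is a generic periodogram band-power estimate on a synthetic signal, not the authors' processing chain; sampling rate, band edges, and the signal itself are assumptions.

```python
import numpy as np

def band_power(signal, fs, band):
    """Average spectral power of `signal` inside the frequency `band` (Hz),
    estimated with a plain FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

fs = 256  # Hz, a common EEG sampling rate
t = np.arange(0, 2.0, 1.0 / fs)
# synthetic "EEG": strong 10 Hz alpha component plus a weaker 20 Hz beta one
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
alpha = band_power(eeg, fs, (8, 13))
beta = band_power(eeg, fs, (13, 30))
```

Comparing band powers across conditions (e.g., calm vs. stressful traffic events) is the kind of quantity a BCI-based driver study can track over time.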
Title:
Speech emotion recognition using wavelet packet reconstruction with attention-based deep recurrent neutral networks
Authors:
Meng, Hao
Yan, Tianhao
Wei, Hongwei
Ji, Xun
Links:
https://bibliotekanauki.pl/articles/2173587.pdf
Publication date:
2021
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Keywords:
speech emotion recognition
voice activity detection
wavelet packet reconstruction
feature extraction
LSTM networks
attention mechanism
rozpoznawanie emocji mowy
wykrywanie aktywności głosowej
rekonstrukcja pakietu falkowego
wyodrębnianie cech
mechanizm uwagi
sieć LSTM
Description:
Speech emotion recognition (SER) is a complicated and challenging task in human-computer interaction because it is difficult to find a feature set that fully discriminates emotional states. The FFT is commonly applied to the raw signal when extracting low-level descriptor features such as short-time energy, fundamental frequency, formants, and MFCCs (mel-frequency cepstral coefficients). However, these features are built in the frequency domain and ignore information from the temporal domain. In this paper, we propose a novel framework that combines a multi-layer wavelet sequence set, obtained from wavelet packet reconstruction (WPR), with a conventional feature set into a mixed feature set for emotion recognition with recurrent neural networks (RNN) based on the attention mechanism. In addition, silent frames have a disadvantageous effect on SER, so we adopt autocorrelation-based voice activity detection to eliminate emotion-irrelevant frames. We show that the proposed algorithm significantly outperforms traditional feature sets in the prediction of spontaneous emotional states on the IEMOCAP corpus and the EMODB database, and achieves better classification in both speaker-independent and speaker-dependent experiments. Notably, we obtain accuracies of 62.52% and 77.57% in the speaker-independent (SI) setting, and 66.90% and 82.26% in the speaker-dependent (SD) setting.
Source:
Bulletin of the Polish Academy of Sciences. Technical Sciences; 2021, 69, 1; art. no. e136300
0239-7528
Appears in:
Bulletin of the Polish Academy of Sciences. Technical Sciences
Content provider:
Biblioteka Nauki
Article
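The autocorrelation-based voice activity detection mentioned above exploits the fact that voiced speech frames are periodic, so their normalized autocorrelation has a strong peak at the pitch lag, while silence or noise does not. A minimal sketch on synthetic frames (frame length, lag range, and the 0.5 decision threshold are assumptions, not the authors' settings):

```python
import numpy as np

def frame_autocorr_peak(frame, min_lag=20, max_lag=200):
    """Largest normalized autocorrelation value in the pitch-lag range;
    periodic (voiced) frames score high, silence or noise scores low."""
    frame = frame - frame.mean()
    energy = np.dot(frame, frame)
    if energy == 0:
        return 0.0
    r = [np.dot(frame[:-lag], frame[lag:]) / energy for lag in range(min_lag, max_lag)]
    return max(r)

fs = 8000
t = np.arange(0, 0.05, 1.0 / fs)           # 50 ms frame
voiced = np.sin(2 * np.pi * 200 * t)        # periodic, pitch 200 Hz (lag 40)
rng = np.random.default_rng(2)
silence = 0.01 * rng.normal(size=t.size)    # low-level background noise

keep_voiced = frame_autocorr_peak(voiced) > 0.5
keep_silence = frame_autocorr_peak(silence) > 0.5
```

Frames failing the threshold are dropped before feature extraction, which is how the framework removes emotion-irrelevant frames.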
Title:
Communication atmosphere in humans and robots interaction based on the concept of fuzzy atmosfield generated by emotional states of humans and robots
Authors:
Liu, Z. T.
Chen, L. F.
Dong, F. Y.
Hirota, K.
Min, W.
Li, D. Y.
Yamazaki, Y.
Links:
https://bibliotekanauki.pl/articles/384920.pdf
Publication date:
2013
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Automatyki i Pomiarów
Keywords:
human-robot interaction
communication atmosphere
fuzzy logic
emotion recognition
Description:
The communication atmosphere based on the emotional states of humans and robots is modeled using the Fuzzy Atmosfield (FA), where human emotion is estimated from bimodal communication cues (i.e., speech and gesture) using weighted fusion and fuzzy logic, and robot emotion is generated by emotional expression synthesis. This makes it possible to quantitatively express the overall affective expression of individuals and helps facilitate smooth communication in human-robot interaction. Experiments in a household environment were performed with four humans and five eye robots, where emotion recognition of humans based on bimodal cues achieved 84% accuracy on average, an improvement of about 10% over speech alone. Experimental results from the FA-based model of communication atmosphere were evaluated against questionnaire surveys; a maximum error of 0.25 and a minimum correlation coefficient of 0.72 for the three axes of the FA confirm the validity of the proposal. In ongoing work, an atmosphere representation system is being planned for casual communication between humans and robots, taking into account multiple emotional modalities such as speech, gesture, and music.
Source:
Journal of Automation Mobile Robotics and Intelligent Systems; 2013, 7, 2; 52-63
1897-8649
2080-2145
Appears in:
Journal of Automation Mobile Robotics and Intelligent Systems
Content provider:
Biblioteka Nauki
Article
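The weighted fusion of bimodal cues mentioned above can be sketched as a simple late fusion of per-emotion confidence scores from the two modalities. The score dictionaries and the 0.6 speech weight are made up for illustration; the paper additionally applies fuzzy logic on top of the fusion.

```python
def fuse_emotions(speech_scores, gesture_scores, w_speech=0.6):
    """Weighted late fusion of per-emotion confidences from two modalities;
    the fused label is the emotion with the highest combined score."""
    fused = {e: w_speech * speech_scores[e] + (1 - w_speech) * gesture_scores[e]
             for e in speech_scores}
    return max(fused, key=fused.get), fused

# hypothetical per-modality confidence scores
speech = {"happy": 0.7, "angry": 0.2, "neutral": 0.1}
gesture = {"happy": 0.4, "angry": 0.5, "neutral": 0.1}
label, fused = fuse_emotions(speech, gesture)
```

Combining modalities this way is what lifts recognition accuracy over speech alone in the reported experiments.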
Title:
Emotion monitoring – verification of physiological characteristics measurement procedures
Authors:
Landowska, A.
Links:
https://bibliotekanauki.pl/articles/220577.pdf
Publication date:
2014
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Keywords:
affective computing
emotion recognition
physiology
motion artifacts
sensor location
Description:
This paper concerns measurement procedures on an emotion monitoring stand designed for tracking human emotions in Human-Computer Interaction with physiological characteristics. The paper addresses the key problem of physiological measurements being disturbed by motion typical for human-computer interaction, such as keyboard typing or mouse movements. An original experiment is described that aimed at the practical evaluation of measurement procedures performed at the emotion monitoring stand constructed at GUT. Different locations of sensors were considered and evaluated for suitability and measurement precision in Human-Computer Interaction monitoring. Alternative locations (ear lobes and forearms) for skin conductance, blood volume pulse and temperature sensors were proposed and verified. The alternative locations showed correlation with the traditional ones, as well as lower sensitivity to movements such as typing or mouse moving; therefore, they can be a better solution for monitoring Human-Computer Interaction.
Source:
Metrology and Measurement Systems; 2014, 21, 4; 719-732
0860-8229
Appears in:
Metrology and Measurement Systems
Content provider:
Biblioteka Nauki
Article
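Verifying an alternative sensor location against the traditional one, as described above, comes down to correlating the two simultaneously recorded traces. A minimal sketch with made-up skin-conductance series (the numbers and the finger/forearm pairing are illustrative assumptions):

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

# hypothetical skin-conductance traces: finger (traditional) vs. forearm (alternative)
finger = [2.1, 2.3, 2.8, 3.5, 3.1, 2.6, 2.4, 2.2]
forearm = [1.9, 2.0, 2.5, 3.2, 2.9, 2.3, 2.1, 2.0]
r = pearson(finger, forearm)
```

A correlation close to 1 indicates the alternative site tracks the same physiological response, which is the criterion the paper uses alongside motion sensitivity.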
Title:
Rozumienie mowy jako moderator związku rozpoznawania emocji z nasileniem symptomów zaburzeń ze spektrum autyzmu (ASD)
Speech comprehension as a moderator of the relationship between emotion recognition and the severity of autism spectrum disorder (ASD) symptoms
Authors:
Krzysztofik, Karolina
Links:
https://bibliotekanauki.pl/articles/2054372.pdf
Publication date:
2021
Publisher:
Uniwersytet Marii Curie-Skłodowskiej. Wydawnictwo Uniwersytetu Marii Curie-Skłodowskiej
Keywords:
autism spectrum disorder
ASD
emotion recognition
speech comprehension
zaburzenia ze spektrum autyzmu
rozpoznawanie emocji
rozumienie mowy
Description:
Contemporary researchers underline the consequences that the difficulties in emotion recognition experienced by persons with autism spectrum disorder (ASD) have for the severity of the symptoms of this disorder. Individuals with ASD, when trying to recognize the emotional states of others, often use compensatory strategies based on relatively well-developed cognitive and linguistic competences. Thus, the relationship between emotion recognition and the severity of ASD symptoms may be moderated by linguistic competences. Our research aimed at determining whether the level of speech comprehension moderates the relationship between emotion recognition and ASD symptom severity. Participants were 63 children with ASD aged from 3 years and 7 months to 9 years and 3 months. The following tools were used: the ASD Symptom Severity Scale, the Emotion Recognition subscale of the Theory of Mind Scale, and the Speech Comprehension subscale from the Intelligence and Development Scales – Preschool (IDS-P). The results indicate that the level of speech comprehension moderates the relationship between the level of emotion recognition and ASD symptom severity with respect to deficits in communication and interaction. These results have implications for integrating speech comprehension therapy into the rehabilitation of individuals with ASD, as well as for theoretical reflection on the determinants of ASD symptom severity.
Source:
Annales Universitatis Mariae Curie-Skłodowska, sectio J – Paedagogia-Psychologia; 2021, 34, 3; 199-219
0867-2040
Appears in:
Annales Universitatis Mariae Curie-Skłodowska, sectio J – Paedagogia-Psychologia
Content provider:
Biblioteka Nauki
Article
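A moderation effect like the one reported above is usually tested by adding an interaction term to a regression model: severity ~ b0 + b1*ER + b2*SC + b3*(ER*SC), where a non-zero b3 means speech comprehension (SC) changes the strength of the emotion-recognition (ER) effect. A sketch on synthetic data (the sample, effect sizes, and noise level are invented, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
emotion_rec = rng.normal(size=n)    # emotion-recognition score
speech_comp = rng.normal(size=n)    # speech-comprehension score (moderator)
# synthetic severity: the emotion-recognition effect weakens as comprehension rises
severity = (5.0 - 1.0 * emotion_rec
            + 0.8 * emotion_rec * speech_comp
            + 0.1 * rng.normal(size=n))

# moderation model with an interaction column
X = np.column_stack([np.ones(n), emotion_rec, speech_comp,
                     emotion_rec * speech_comp])
b, *_ = np.linalg.lstsq(X, severity, rcond=None)
```

Recovering a clearly non-zero interaction coefficient b[3] is the statistical signature of moderation that the study tests for.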
