
Simple item record

dc.contributor.author: Verkholyak, Oxana Vladimirovna
dc.contributor.author: Kaya, Heysem
dc.contributor.author: Karpov, Alexey A.
dc.date.accessioned: 2022-05-11T14:15:55Z
dc.date.available: 2022-05-11T14:15:55Z
dc.date.issued: 2019
dc.identifier.issn: 2078-9181
dc.identifier.uri: https://doi.org/10.15622/sp.18.1.30-56
dc.identifier.uri: https://hdl.handle.net/20.500.11776/6120
dc.description.abstract: Recently, Speech Emotion Recognition (SER) has become an important research topic in affective computing. It is a difficult problem, and some of the greatest challenges lie in feature selection and representation. A good feature representation should reflect both the global trends and the temporal structure of the signal, since emotions naturally evolve in time; this has become possible with the advent of Recurrent Neural Networks (RNNs), which are actively used today for various sequence modeling tasks. This paper proposes a hybrid approach to feature representation that combines traditionally engineered statistical features with a Long Short-Term Memory (LSTM) sequence representation, taking advantage of both the short-term and the long-term acoustic characteristics of the signal and thereby capturing not only the general trends but also the temporal structure of the signal. The proposed method is evaluated on three publicly available acted emotional speech corpora in three different languages: RUSLANA (Russian), BUEMODB (Turkish) and EMODB (German). Compared to the traditional approach, our experiments show an absolute improvement of 2.3% and 2.8% on two of the three databases, and comparable performance on the third. Therefore, provided enough training data, the proposed method proves effective in modelling the emotional content of speech utterances. © 2019 St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences. All rights reserved. [en_US]
dc.description.sponsorship: Russian Science Foundation, RSF: 18-11-00145 [en_US]
dc.description.sponsorship: This research is supported by the Russian Science Foundation (project № 18-11-00145). [en_US]
dc.language.iso: eng [en_US]
dc.publisher: St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences [en_US]
dc.identifier.doi: 10.15622/sp.18.1.30-56
dc.rights: info:eu-repo/semantics/openAccess [en_US]
dc.subject: Affective computing [en_US]
dc.subject: Artificial neural networks [en_US]
dc.subject: Computational paralinguistics [en_US]
dc.subject: Context modelling [en_US]
dc.subject: Feature representation [en_US]
dc.subject: Long short-term memory [en_US]
dc.subject: Speech emotion recognition [en_US]
dc.title: Modeling short-term and long-term dependencies of the speech signal for paralinguistic emotion classification [en_US]
dc.type: article [en_US]
dc.relation.ispartof: SPIIRAS Proceedings [en_US]
dc.department: Faculties, Çorlu Faculty of Engineering, Department of Computer Engineering [en_US]
dc.identifier.volume: 18 [en_US]
dc.identifier.issue: 1 [en_US]
dc.identifier.startpage: 30 [en_US]
dc.identifier.endpage: 56 [en_US]
dc.institutionauthor: Kaya, Heysem
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member [en_US]
dc.authorscopusid: 57199057349
dc.authorscopusid: 36241785000
dc.authorscopusid: 57219469958
dc.identifier.scopus: 2-s2.0-85063340623 [en_US]
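
The abstract above describes a hybrid feature representation: utterance-level statistical functionals capture the global trends of the signal, while an LSTM encoding of the frame-level feature sequence captures its temporal structure, and the two are combined for emotion classification. The sketch below is a minimal illustration of that idea, not the authors' implementation; the feature dimensions, layer sizes, and seven-class output are assumptions chosen for the example.

# A minimal sketch (assumed dimensions, not the paper's code) of the hybrid
# feature idea: an LSTM summarises the frame-level acoustic sequence, and its
# summary is concatenated with utterance-level statistical functionals.
import torch
import torch.nn as nn

class HybridSER(nn.Module):
    def __init__(self, frame_dim=40, stat_dim=384, hidden=128, n_emotions=7):
        super().__init__()
        # LSTM encodes the frame-level feature sequence (temporal structure).
        self.lstm = nn.LSTM(frame_dim, hidden, batch_first=True)
        # Classifier sees the LSTM summary joined with the statistical features.
        self.classifier = nn.Linear(hidden + stat_dim, n_emotions)

    def forward(self, frames, stats):
        # frames: (batch, time, frame_dim); stats: (batch, stat_dim)
        _, (h_last, _) = self.lstm(frames)         # final hidden state as sequence summary
        fused = torch.cat([h_last[-1], stats], dim=1)
        return self.classifier(fused)

# Toy usage: 4 utterances, 300 frames of 40-dim features, 384-dim functionals.
model = HybridSER()
logits = model(torch.randn(4, 300, 40), torch.randn(4, 384))
print(logits.shape)  # torch.Size([4, 7])

The concatenation step reflects the paper's stated motivation: the functionals alone summarise global trends, the LSTM state alone summarises temporal evolution, and the joint vector lets the classifier use both.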

