

dc.contributor.author: Verkholyak, Oxana Vladimirovna
dc.contributor.author: Fedotov, Dmitrii
dc.contributor.author: Kaya, Heysem
dc.contributor.author: Zhang, Yang
dc.contributor.author: Karpov, Alexey A.
dc.date.accessioned: 2022-05-11T14:15:55Z
dc.date.available: 2022-05-11T14:15:55Z
dc.date.issued: 2019
dc.identifier.isbn: 978-1-4799-8131-1
dc.identifier.issn: 1520-6149
dc.identifier.uri: https://hdl.handle.net/20.500.11776/6119
dc.description: 44th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) -- MAY 12-17, 2019 -- Brighton, ENGLAND [en_US]
dc.description.abstract: Emotions occur in complex social interactions, and thus processing of isolated utterances may not be sufficient to grasp the nature of the underlying emotional states. Dialog speech provides useful information about context that explains nuances of emotions and their transitions. Context can be defined on different levels; this paper proposes a hierarchical context modelling approach based on an RNN-LSTM architecture, which models acoustic context at the frame level and the partner's emotional context at the dialog level. The method is shown to be effective, together with a cross-corpus training setup and a domain adaptation technique, in a set of speaker-independent cross-validation experiments on the IEMOCAP corpus for three-level activation and valence classification. As a result, the state of the art on this corpus is advanced for both dimensions using only the acoustic modality. [en_US]
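The two-level architecture described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: it uses a plain tanh recurrent cell in place of LSTM for brevity, untrained random weights, and made-up dimensions; all names (`rnn_layer`, `predict_dialog`, `N_FEAT`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_layer(frames, W_x, W_h, b):
    """Run a simple tanh recurrent cell over a sequence; return the final hidden state."""
    h = np.zeros(W_h.shape[0])
    for x in frames:
        h = np.tanh(W_x @ x + W_h @ h + b)
    return h

# Hypothetical dimensions.
N_FEAT = 40   # acoustic features per frame
H_UTT = 32    # frame-level (utterance) hidden size
N_EMO = 3     # emotion classes (e.g. three activation levels)
H_DLG = 16    # dialog-level hidden size

# Level 1 parameters: acoustic frames -> utterance embedding.
Wx_u = rng.normal(0, 0.1, (H_UTT, N_FEAT))
Wh_u = rng.normal(0, 0.1, (H_UTT, H_UTT))
b_u = np.zeros(H_UTT)

# Level 2 parameters: utterance embedding + partner's emotion -> dialog context state.
Wx_d = rng.normal(0, 0.1, (H_DLG, H_UTT + N_EMO))
Wh_d = rng.normal(0, 0.1, (H_DLG, H_DLG))
b_d = np.zeros(H_DLG)

# Output layer: dialog state -> emotion class scores.
W_out = rng.normal(0, 0.1, (N_EMO, H_DLG))

def predict_dialog(dialog):
    """dialog: list of (frames, partner_emotion_onehot) pairs, one per utterance, in order."""
    h_d = np.zeros(H_DLG)
    preds = []
    for frames, partner_emo in dialog:
        # Level 1: summarize the utterance's acoustic frames into an embedding.
        utt_emb = rnn_layer(frames, Wx_u, Wh_u, b_u)
        # Level 2: update the dialog context with the embedding and partner's emotion.
        h_d = np.tanh(Wx_d @ np.concatenate([utt_emb, partner_emo]) + Wh_d @ h_d + b_d)
        preds.append(int(np.argmax(W_out @ h_d)))
    return preds

# Toy dialog: two utterances of 50 random frames each, with one-hot partner emotions.
dialog = [(rng.normal(size=(50, N_FEAT)), np.eye(N_EMO)[0]),
          (rng.normal(size=(50, N_FEAT)), np.eye(N_EMO)[2])]
print(predict_dialog(dialog))
```

The key design point is the hierarchy: the frame-level recurrence compresses each utterance independently, while the dialog-level recurrence carries state across utterances so each prediction is conditioned on the conversation so far, including the partner's emotional labels.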
dc.description.sponsorship: Inst Elect & Elect Engineers, Inst Elect & Elect Engineers Signal Proc Soc [en_US]
dc.description.sponsorship: Russian Science Foundation (RSF) [18-11-00145]; Huawei Innovation Research Program, Huawei Technologies [en_US]
dc.description.sponsorship: The study is supported by the Russian Science Foundation (project No. 18-11-00145) and the Huawei Innovation Research Program. [en_US]
dc.language.iso: eng [en_US]
dc.publisher: IEEE [en_US]
dc.rights: info:eu-repo/semantics/closedAccess [en_US]
dc.subject: Emotion recognition [en_US]
dc.subject: cross-corpus [en_US]
dc.subject: context modelling [en_US]
dc.subject: dialog systems [en_US]
dc.subject: LSTM [en_US]
dc.subject: Cross-Corpus [en_US]
dc.subject: Recognition [en_US]
dc.title: Hierarchical Two-Level Modelling of Emotional States in Spoken Dialog Systems [en_US]
dc.type: proceedingPaper [en_US]
dc.relation.ispartof: 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) [en_US]
dc.department: Faculties, Çorlu Faculty of Engineering, Department of Computer Engineering [en_US]
dc.authorid: 0000-0003-3424-652X
dc.authorid: 0000-0002-5583-0410
dc.authorid: 0000-0001-7947-5508
dc.identifier.startpage: 6700 [en_US]
dc.identifier.endpage: 6704 [en_US]
dc.institutionauthor: Kaya, Heysem
dc.relation.publicationcategory: Conference Item - International - Institutional Faculty Member [en_US]
dc.authorwosid: Karpov, Alexey A/A-8905-2012
dc.authorwosid: Fedotov, Dmitrii/AAE-1738-2019
dc.authorwosid: Verkholyak, Oxana/L-5818-2016
dc.identifier.wos: WOS:000482554006186 [en_US]


Files in this item:

This item appears in the following collection(s).