Title: Hierarchical Two-Level Modelling of Emotional States in Spoken Dialog Systems
Authors: Verkholyak, Oxana Vladimirovna; Fedotov, Dmitrii; Kaya, Heysem; Zhang, Yang; Karpov, Alexey A.
Type: Conference Object
Conference: 44th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 12-17, 2019, Brighton, England
Date Issued: 2019
Date Accessioned/Available: 2022-05-11
Pages: 6700-6704
ISBN: 978-1-4799-8131-1
ISSN: 1520-6149
URI: https://hdl.handle.net/20.500.11776/6119
Language: English
Access Rights: info:eu-repo/semantics/closedAccess
Keywords: Emotion recognition; cross-corpus; context modelling; dialog systems; LSTM; Recognition
WOS ID: WOS:000482554006186
Scopus ID: 2-s2.0-85068970731

Abstract: Emotions occur in complex social interactions, so processing isolated utterances may not be sufficient to grasp the nature of the underlying emotional states. Dialog speech provides useful contextual information that explains nuances of emotions and their transitions. Context can be defined on different levels; this paper proposes a hierarchical context modelling approach based on an RNN-LSTM architecture, which models acoustic context on the frame level and the partner's emotional context on the dialog level. The method is shown to be effective, together with a cross-corpus training setup and a domain adaptation technique, in a set of speaker-independent cross-validation experiments on the IEMOCAP corpus for three-level activation and valence classification. As a result, the state of the art on this corpus is advanced for both dimensions using only the acoustic modality.
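The abstract describes a two-level hierarchy: a frame-level LSTM summarizes the acoustics of each utterance, and a dialog-level LSTM carries emotional context (including the partner's state) across turns. A minimal NumPy sketch of that idea follows; all dimensions, the one-hot partner-emotion encoding, and the random weights are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm(x_seq, W, U, b, h_dim):
    """Run a plain LSTM over a sequence; return the final hidden state."""
    h, c = np.zeros(h_dim), np.zeros(h_dim)
    for x in x_seq:
        z = W @ x + U @ h + b               # stacked gate pre-activations
        i, f, o, g = np.split(z, 4)
        i, f, o = (1 / (1 + np.exp(-v)) for v in (i, f, o))  # gate sigmoids
        c = f * c + i * np.tanh(g)          # cell state update
        h = o * np.tanh(c)                  # hidden state output
    return h

def init(in_dim, h_dim):
    """Random LSTM parameters (W, U, b) for the four stacked gates."""
    return (rng.standard_normal((4 * h_dim, in_dim)) * 0.1,
            rng.standard_normal((4 * h_dim, h_dim)) * 0.1,
            np.zeros(4 * h_dim))

# Assumed sizes: e.g. 13 MFCC features per frame, 3 classes (low/mid/high).
FEAT, H1, H2, N_CLASSES = 13, 16, 8, 3
P1 = init(FEAT, H1)                 # level 1: frame-level acoustic LSTM
P2 = init(H1 + N_CLASSES, H2)       # level 2: dialog-level context LSTM
W_out = rng.standard_normal((N_CLASSES, H2)) * 0.1

# A toy dialog: each turn = (acoustic frame sequence, partner emotion one-hot).
dialog = [(rng.standard_normal((50, FEAT)), np.eye(N_CLASSES)[1]),
          (rng.standard_normal((80, FEAT)), np.eye(N_CLASSES)[0])]

# Level 1: summarize each utterance's frames, append partner's emotion.
utt_states = [np.concatenate([lstm(frames, *P1, H1), partner])
              for frames, partner in dialog]

# Level 2: model emotional context across the dialog, then classify.
dlg_state = lstm(utt_states, *P2, H2)
logits = W_out @ dlg_state
print("predicted class:", int(np.argmax(logits)))
```

With trained weights, `logits` would score the three activation (or valence) levels for the current speaker; here the forward pass only illustrates how the two LSTM levels compose.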