Authors: Karslioglu, N.; Timar, Y.; Salah, Albert Ali; Kaya, Heysem
Date accessioned: 2022-05-11
Date available: 2022-05-11
Date issued: 2017
ISSN: 1613-0073
URI: https://hdl.handle.net/20.500.11776/6091
Venue: 2017 Multimedia Benchmark Workshop, MediaEval 2017, 13–15 September 2017
Conference code: 131670
Abstract: In this paper, we present our approach for the Emotional Impact of Movies task of the MediaEval 2017 Challenge, involving multimodal fusion for predicting arousal and valence for movie clips. Our system has two pipelines. In the first, we extracted audio/visual features and used a combination of PCA, Fisher vector encoding, feature selection, and extreme learning machine classifiers. In the second, we focused on the classifiers rather than on feature selection. © 2017 Author/owner(s).
Language: en
Rights: info:eu-repo/semantics/closedAccess
Keywords: Learning systems; Motion pictures; Extreme learning machine; Fisher vectors; Movie clips; Multi-modal fusion; Feature extraction
Title: BOUN-NKU in MediaEval 2017 Emotional Impact of Movies Task
Type: Conference Object
Scopus ID: 2-s2.0-85034951892
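The abstract names extreme learning machine (ELM) classifiers among the pipeline components. The general ELM idea (a fixed random hidden layer followed by a closed-form ridge-regression readout) can be sketched as below; this is a generic illustration on synthetic placeholder data, not the authors' actual system, and the feature dimension, hidden size, and regularization strength are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=100, ridge=1e-3):
    """Fit an ELM regressor: random input weights, closed-form readout."""
    # Random, untrained hidden layer (scaled so pre-activations stay O(1))
    W = rng.standard_normal((X.shape[1], n_hidden)) / np.sqrt(X.shape[1])
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    # Regularized least squares for the output weights (the only learned part)
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Synthetic stand-ins for per-clip features and arousal/valence targets
X = rng.standard_normal((200, 32))
Y = (X @ rng.standard_normal((32, 2))) * 0.1  # 2 outputs: arousal, valence
W, b, beta = elm_fit(X, Y)
pred = elm_predict(X, W, b, beta)
print(pred.shape)  # one (arousal, valence) pair per clip
```

Because only the readout is solved for, training reduces to a single linear solve, which is why ELMs are attractive for quick benchmark submissions.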