TUDCL at MediaEval 2013 Violent Scenes Detection: Training with multi-modal features by MKL

Shinichi Goto, Terumasa Aoki

Research output: Contribution to journal › Conference article › peer-review

1 Citation (Scopus)

Abstract

This paper describes the work carried out by team TUDCL for the Violent Scenes Detection task at MediaEval 2013. Our approach combines visual, temporal and audio features with machine learning at the segment level. A block-saliency-map based dense trajectory is proposed for the visual and temporal features, and MFCC and delta-MFCC are used as the audio features. For classification, Multiple Kernel Learning is applied, which is effective when multiple feature modalities are available.
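The core of the Multiple Kernel Learning step is combining one kernel per modality into a single kernel for the classifier. As a minimal sketch (not the authors' implementation; the feature dimensions and the fixed weights below are illustrative assumptions, whereas MKL would learn the weights from training data):

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """RBF kernel matrix from pairwise squared Euclidean distances."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.clip(d2, 0.0, None))

rng = np.random.default_rng(0)
# Hypothetical per-segment descriptors, one row per video segment:
visual = rng.normal(size=(6, 10))   # e.g. dense-trajectory-based features
audio = rng.normal(size=(6, 13))    # e.g. MFCC + delta-MFCC statistics

# MKL learns these mixture weights; here they are fixed for illustration.
w_visual, w_audio = 0.6, 0.4
K = w_visual * rbf_kernel(visual) + w_audio * rbf_kernel(audio)
```

The combined matrix `K` remains symmetric positive semidefinite (a convex combination of valid kernels), so it can be passed directly to any kernel classifier that accepts a precomputed kernel.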

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 1043
Publication status: Published - 2013 Jan 1
Event: 2013 Multimedia Benchmark Workshop, MediaEval 2013 - Barcelona, Spain
Duration: 2013 Oct 18 - 2013 Oct 19

ASJC Scopus subject areas

  • Computer Science (all)
