Multi-modal voice activity detection by embedding image features into speech signal

Yohei Abe, Akinori Ito

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

Lip movement has a close relationship with speech because the lips move when we talk. The idea behind this work is to extract a lip movement feature from the facial video and embed it into the speech signal using an information hiding technique. With the proposed framework, we can provide advanced speech communication using only the speech signal, which carries the lip movement features, without increasing the bitrate of the signal. In this paper, we present the basic framework of the method and apply the proposed method to multi-modal voice activity detection (VAD). In a detection experiment using a support vector machine, we obtained better performance than audio-only VAD in a noisy environment. In addition, we investigated how embedding data into the speech signal affects sound quality and detection performance.
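To make the described pipeline concrete, the following is a minimal, hypothetical sketch in Python (using NumPy and scikit-learn). It assumes LSB embedding of the lip-movement bits into 16-bit speech samples, a log-energy audio feature, a random stand-in for the per-frame lip-motion value, and toy speech/non-speech labels; none of these specifics are taken from the paper itself.

    import numpy as np
    from sklearn.svm import SVC

    def embed_lsb(speech_int16, bits):
        # Hide a short bit stream in the least-significant bits of 16-bit samples.
        stego = speech_int16.copy()
        stego[: len(bits)] = (stego[: len(bits)] & ~1) | bits
        return stego

    def extract_lsb(stego_int16, n_bits):
        # Recover the hidden bit stream from the LSBs of the received samples.
        return stego_int16[:n_bits] & 1

    # Toy data: 1 s of 16-bit "speech" at 16 kHz and a 64-bit lip-feature payload.
    rng = np.random.default_rng(0)
    speech = rng.normal(scale=3000.0, size=16000).astype(np.int16)
    lip_bits = rng.integers(0, 2, size=64).astype(np.int16)

    stego = embed_lsb(speech, lip_bits)
    recovered = extract_lsb(stego, len(lip_bits))
    assert np.array_equal(recovered, lip_bits)  # payload survives embedding

    # Frame-level VAD: fuse an audio feature (log energy) with a lip-movement
    # feature and train an SVM on toy speech/non-speech labels.
    frame_len = 400  # 25 ms frames at 16 kHz
    frames = stego[: (len(stego) // frame_len) * frame_len].reshape(-1, frame_len)
    log_energy = np.log(np.sum(frames.astype(np.float64) ** 2, axis=1) + 1e-9)
    lip_motion = rng.random(len(frames))  # stand-in for a per-frame mouth-motion value
    X = np.column_stack([log_energy, lip_motion])
    y = (log_energy > np.median(log_energy)).astype(int)  # toy labels for illustration

    clf = SVC(kernel="rbf").fit(X, y)
    print("frame-level accuracy on toy data:", clf.score(X, y))

In practice the lip-motion feature would be computed from the mouth region of the facial video and the labels would come from annotated speech/non-speech segments; the sketch only illustrates how side information hidden in the speech signal can be recovered and fused with an audio feature for SVM-based VAD.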

Original language: English
Title of host publication: Proceedings - 2013 9th International Conference on Intelligent Information Hiding and Multimedia Signal Processing, IIH-MSP 2013
Publisher: IEEE Computer Society
Pages: 271-274
Number of pages: 4
ISBN (Print): 9780769551203
DOIs
Publication status: Published - 2013 Jan 1
Event: 9th International Conference on Intelligent Information Hiding and Multimedia Signal Processing, IIH-MSP 2013 - Beijing, China
Duration: 2013 Oct 16 - 2013 Oct 18

Keywords

  • audio-visual
  • information hiding
  • multi-modal
  • voice activity detection (VAD)

ASJC Scopus subject areas

  • Artificial Intelligence
  • Information Systems
  • Signal Processing
