Abstract
Lip movement is closely related to speech because the lips move when we talk. The idea behind this work is to extract lip movement features from facial video and embed them into the speech signal using an information hiding technique. With the proposed framework, advanced speech communication can be provided using only a speech signal that carries the lip movement features, without increasing the bitrate of the signal. In this paper, we present the basic framework of the method and apply it to multi-modal voice activity detection (VAD). In a detection experiment using a support vector machine, the proposed method outperformed audio-only VAD in a noisy environment. In addition, we investigated how embedding data into the speech signal affects sound quality and detection performance.
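The abstract gives only the outline of the framework. As a rough, hypothetical illustration of the pipeline it describes (hide a low-rate lip-movement feature in the speech samples, recover it at the receiver, and feed it together with an audio feature to an SVM-based VAD), the sketch below uses 16-bit PCM speech, one binary lip-activity bit per 10 ms frame hidden in a sample LSB, a log-energy audio feature, synthetic data, and a scikit-learn SVC. All of these choices (frame size, embedding scheme, features, classifier settings) are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch, assuming LSB embedding of one lip-activity bit per 10 ms frame
# of 16-bit PCM speech; the real feature extraction and embedding are not specified here.
import numpy as np
from sklearn.svm import SVC

FS = 16000
FRAME = 160  # 10 ms frames at 16 kHz (assumed)

def embed_bits(pcm: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide one lip-movement bit per frame in the LSB of the frame's first sample."""
    out = pcm.copy()
    idx = np.arange(len(bits)) * FRAME
    out[idx] = (out[idx] & ~1) | bits.astype(np.int16)
    return out

def extract_bits(pcm: np.ndarray, n_frames: int) -> np.ndarray:
    """Recover the embedded lip-movement bits at the receiver."""
    idx = np.arange(n_frames) * FRAME
    return (pcm[idx] & 1).astype(np.float64)

def frame_energy(pcm: np.ndarray, n_frames: int) -> np.ndarray:
    """Log energy per frame as a simple audio-only feature."""
    frames = pcm[: n_frames * FRAME].astype(np.float64).reshape(n_frames, FRAME)
    return np.log10(np.mean(frames ** 2, axis=1) + 1e-9)

# Toy demonstration with synthetic speech/silence labels and a noisy lip cue.
rng = np.random.default_rng(0)
n_frames = 400
labels = (rng.random(n_frames) < 0.5).astype(int)                 # 1 = speech, 0 = silence
lip_bits = (labels ^ (rng.random(n_frames) < 0.1)).astype(int)    # lip cue with 10% errors

# Synthetic noisy PCM: louder samples in frames where speech is active.
pcm = (rng.normal(0, 300, n_frames * FRAME)
       + np.repeat(labels, FRAME) * rng.normal(0, 3000, n_frames * FRAME)).astype(np.int16)

tx = embed_bits(pcm, lip_bits)            # sender: hide lip feature in the speech signal
rx_bits = extract_bits(tx, n_frames)      # receiver: recover lip feature from the signal
X = np.column_stack([frame_energy(tx, n_frames), rx_bits])

clf = SVC(kernel="rbf").fit(X[:300], labels[:300])
print("frame-level VAD accuracy:", clf.score(X[300:], labels[300:]))
```

Because the lip feature travels inside the speech samples themselves, the receiver needs no extra channel or bitrate; the sketch simply shows how such a recovered cue could be combined with an audio feature in an SVM classifier.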
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings - 2013 9th International Conference on Intelligent Information Hiding and Multimedia Signal Processing, IIH-MSP 2013 |
| Publisher | IEEE Computer Society |
| Pages | 271-274 |
| Number of pages | 4 |
| ISBN (Print) | 9780769551203 |
| DOI | |
| Publication status | Published - 1 Jan 2013 |
| Event | 9th International Conference on Intelligent Information Hiding and Multimedia Signal Processing, IIH-MSP 2013 - Beijing, China |
| Duration | 16 Oct 2013 → 18 Oct 2013 |
Other
| Other | 9th International Conference on Intelligent Information Hiding and Multimedia Signal Processing, IIH-MSP 2013 |
| --- | --- |
| Country | China |
| City | Beijing |
| Period | 16 Oct 2013 → 18 Oct 2013 |
ASJC Scopus subject areas
- Artificial Intelligence
- Information Systems
- Signal Processing