Abstract
This paper describes a model adaptation technique for emotional speech recognition based on the multiple-regression HMM (MR-HMM). We use a low-dimensional vector called the style vector, which corresponds to the degree of expressivity of the emotional speech, as the explanatory variable of the regression. In the proposed technique, the value of the style vector is first estimated for the input speech. Then, using the estimated style vector, new mean vectors of the output distributions of the HMM are adapted to the input style. Since the style vector is estimated for every input utterance, on-line adaptation can be performed on an utterance-by-utterance basis. We perform phoneme recognition experiments on professional narrators' acted speech and evaluate the performance by comparison with style-dependent and style-independent HMMs. Experimental results show that the proposed technique reduces the error rate of the style-independent model by 11%.
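The mean adaptation step outlined above can be sketched in a few lines: in an MR-HMM, each state's mean is expressed as a linear regression on the style vector, so once a per-utterance style vector has been estimated, the adapted means follow from a matrix product. The snippet below is a minimal NumPy illustration under that assumption; the names (`adapt_means`, `regression_matrices`, `style_vector`) and the shapes are illustrative, not taken from the paper.

```python
# Minimal sketch of MR-HMM mean adaptation (illustrative, not the paper's code).
# Assumption: each state i stores a regression matrix H[i] of shape
# (feat_dim, style_dim + 1) learned at training time. The style vector s
# estimated for the current utterance is augmented with a bias term, and the
# adapted mean for state i is H[i] @ [s; 1].
import numpy as np

def adapt_means(regression_matrices: np.ndarray, style_vector: np.ndarray) -> np.ndarray:
    """Return adapted mean vectors, one per HMM state.

    regression_matrices: shape (num_states, feat_dim, style_dim + 1)
    style_vector:        shape (style_dim,), estimated from the input utterance
    """
    xi = np.append(style_vector, 1.0)   # augment style vector with bias term
    return regression_matrices @ xi     # broadcasts to (num_states, feat_dim)

# Example: 3 states, 2-dimensional features, 1-dimensional style vector
# (e.g. the degree of expressivity of the utterance).
H = np.random.randn(3, 2, 2)
s_hat = np.array([0.7])                 # style vector estimated per utterance
adapted_means = adapt_means(H, s_hat)   # means used to decode this utterance
```

Because the regression matrices are fixed after training, re-estimating only the low-dimensional style vector for each utterance keeps the per-utterance adaptation cost small, which is what makes the on-line setting practical.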
| Original language | English |
| --- | --- |
| Pages (from-to) | 1297-1300 |
| Number of pages | 4 |
| Journal | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH |
| Publication status | Published - 2008 Dec 1 |
| Externally published | Yes |
| Event | INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association, Brisbane, QLD, Australia, 2008 Sep 22 - 2008 Sep 26 |
Keywords
- Emotional speech
- Multiple-regression HMM
- Speaking style
- Style estimation
ASJC Scopus subject areas
- Human-Computer Interaction
- Signal Processing
- Software
- Sensory Systems