Prosodic variation enhancement using unsupervised context labeling for HMM-based expressive speech synthesis

Yu Maeno, Takashi Nose, Takao Kobayashi, Tomoki Koriyama, Yusuke Ijima, Hideharu Nakajima, Hideyuki Mizuno, Osamu Yoshioka

Research output: Article, peer-reviewed

11 citations (Scopus)


This paper proposes an unsupervised labeling technique using phrase-level prosodic contexts for HMM-based expressive speech synthesis, which enables users to manually enhance the prosodic variation of synthetic speech without degrading naturalness. In the proposed technique, HMMs are first trained using conventional labels that include only linguistic information, and prosodic features are generated from these HMMs. The average difference between the original and generated prosodic features is then calculated for each accent phrase and classified into three classes, e.g., low, neutral, and high in the case of fundamental frequency. The resulting prosodic context label has a practical meaning, such as relatively high or low pitch at the phrase level, so users can be expected to modify the prosodic characteristics of synthetic speech intuitively by manually changing the proposed labels. In the experiments, we evaluate the proposed technique under both ideal and practical conditions using sales-talk and fairy-tale speech recorded in a realistic domain. In the evaluation under the practical condition, we test whether users achieve their intended prosodic modification by changing the proposed context label of a certain accent phrase in a given sentence.
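The labeling step described above can be illustrated with a minimal sketch. This is not the authors' implementation: the threshold value, the use of log-F0 means, and the function names are assumptions introduced for illustration only.

```python
import numpy as np

def label_phrase(orig_f0, gen_f0, threshold=0.1):
    """Assign a phrase-level prosodic context label (illustrative sketch).

    orig_f0, gen_f0: log-F0 values (arrays) over one accent phrase, taken
    from the original speech and from features generated by HMMs trained
    with linguistic-only labels.
    threshold: hypothetical classification margin; the abstract does not
    specify how the three classes are delimited.
    """
    # Average difference between original and generated prosodic features
    diff = np.mean(orig_f0) - np.mean(gen_f0)
    if diff > threshold:
        return "high"     # natural pitch sits above the model's prediction
    elif diff < -threshold:
        return "low"      # natural pitch sits below the model's prediction
    return "neutral"

# Example: a phrase whose natural log-F0 lies above the generated contour
print(label_phrase(np.array([5.2, 5.3, 5.25]), np.array([5.0, 5.05, 5.0])))
```

Because the resulting label ("high"/"neutral"/"low") is interpretable, a user could flip it by hand, e.g. from "neutral" to "high", to raise the relative pitch of that phrase at synthesis time, which is the intuitive control the paper aims for.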

Journal: Speech Communication
Publication status: Published - 2014

ASJC Scopus subject areas

  • Software
  • Modeling and Simulation
  • Communication
  • Language and Linguistics
  • Linguistics and Language
  • Computer Vision and Pattern Recognition
  • Computer Science Applications

