Building HMM-TTS voices on diverse data

Vincent Wan, Javier Latorre, Kayoko Yanagisawa, Norbert Braunschweiler, Langzhou Chen, Mark J.F. Gales, Masami Akamine

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

The statistical models of hidden Markov model based text-to-speech (HMM-TTS) systems are typically built using homogeneous data. It is possible to acquire data from many different sources, but combining them leads to a non-homogeneous, or diverse, dataset. This paper describes the application of average voice models (AVMs) and a novel application of cluster adaptive training (CAT) with multiple context-dependent decision trees to create HMM-TTS voices from diverse data: speech recorded in studios mixed with speech obtained from the internet. Training AVM and CAT models on diverse data yields better quality speech than training on high-quality studio data alone. Tests show that CAT is able to create a voice for a target speaker from as little as seven seconds of adaptation data; an AVM would need more data to reach the same level of similarity to the target speaker. Tests also show that CAT produces higher quality voices than AVMs irrespective of the amount of adaptation data. Lastly, it is shown that it is beneficial to model the data using multiple context-dependent decision trees.
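For readers unfamiliar with CAT, the sketch below follows the standard cluster adaptive training formulation rather than the paper's exact variant: the mean of each Gaussian component is a speaker-weighted combination of cluster means, and adapting to a new speaker only requires estimating a low-dimensional weight vector, which is consistent with very small amounts of adaptation data being usable. The notation (P clusters, weight vector \lambda^{(s)}, cluster mean matrix M_m) is assumed here for illustration.

  % Minimal sketch of the standard CAT mean model (assumed, not reproduced from the paper):
  % component mean for speaker s is an interpolation of P cluster means,
  % and adaptation reduces to estimating the weight vector \lambda^{(s)}.
  \mu_m^{(s)} = \sum_{p=1}^{P} \lambda_p^{(s)} \, \mu_{m,p} = M_m \, \lambda^{(s)}
  \hat{\lambda}^{(s)} = \arg\max_{\lambda} \; p\!\left(O^{(s)} \mid \lambda, \mathcal{M}\right)

Here O^{(s)} is the adaptation data for speaker s and \mathcal{M} the canonical (cluster) model parameters. By contrast, average voice models are typically adapted with linear transforms of the model parameters, which involve many more free parameters per speaker and therefore generally need more adaptation data; this is one plausible reading of the similarity gap reported in the abstract, not a claim made by the paper itself.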

Original language: English
Article number: 6687250
Pages (from-to): 296-306
Number of pages: 11
Journal: IEEE Journal on Selected Topics in Signal Processing
Volume: 8
Issue number: 2
DOIs
Publication status: Published - 2014 Apr 1
Externally published: Yes

Keywords

  • Average voice models
  • cluster adaptive training
  • speaker adaptation
  • speech synthesis

ASJC Scopus subject areas

  • Signal Processing
  • Electrical and Electronic Engineering
