Learning lexicons from spoken utterances based on statistical model selection

Ryo Taguchi, Naoto Iwahashi, Kotaro Funakoshi, Mikio Nakano, Takashi Nose, Tsuneo Nitta

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

This paper proposes a method for the unsupervised learning of lexicons from pairs of a spoken utterance and an object representing its meaning, under the condition that no prior linguistic knowledge is used other than acoustic models of Japanese phonemes. The main problems are the word segmentation of spoken utterances and the learning of the phoneme sequences of the words. To obtain a lexicon, a statistical model representing the joint probability of an utterance and an object is learned based on the minimum description length (MDL) principle. The model consists of three parts: a word list in which each word is represented by a phoneme sequence, a word-bigram model, and a word-meaning model. Through alternate learning of these parts, acoustically, grammatically, and semantically appropriate units of phoneme sequences that cover all utterances are acquired as words. Experimental results show that the model acquires the phoneme sequences of object words with an accuracy of about 83.6%.
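To give a concrete sense of how an MDL criterion selects a lexicon, the following is a minimal, illustrative Python sketch, not the paper's implementation: it scores a candidate segmentation by a two-part description length (bits to encode the word list as phoneme sequences, plus bits to encode the corpus given the word list). The function name `description_length`, the phoneme inventory size of 40, and the use of a word-unigram data term (in place of the paper's word-bigram and word-meaning models) are all simplifying assumptions made here for illustration.

```python
import math
from collections import Counter

def description_length(lexicon, segmented_corpus, phoneme_inventory_size=40):
    """Two-part MDL score (illustrative): bits to encode the lexicon
    (each word as a phoneme sequence) plus bits to encode the corpus
    as word tokens under a unigram model. The paper additionally uses
    a word-bigram model and a word-meaning model; this sketch keeps
    only a unigram data term for brevity."""
    # L(model): each phoneme costs log2(inventory size + 1) bits,
    # counting one end-of-word symbol per lexical entry.
    model_bits = sum(
        (len(word) + 1) * math.log2(phoneme_inventory_size + 1)
        for word in lexicon
    )
    # L(data | model): negative log-likelihood of the word tokens.
    counts = Counter(w for utt in segmented_corpus for w in utt)
    total = sum(counts.values())
    data_bits = -sum(c * math.log2(c / total) for c in counts.values())
    return model_bits + data_bits

# Toy example: two candidate segmentations of the same phoneme strings.
# "akai ringo" (red apple) and "aoi ringo" (blue apple).
candidate_a = [["akai", "ringo"], ["aoi", "ringo"]]   # shared unit "ringo"
candidate_b = [["akairingo"], ["aoiringo"]]           # whole utterances as words

for name, seg in [("a", candidate_a), ("b", candidate_b)]:
    lex = {w for utt in seg for w in utt}
    print(name, round(description_length(lex, seg), 2), "bits")
```

Running this toy example, the segmentation that reuses "ringo" across utterances gets the shorter total description length, which is the intuition behind using MDL to drive word segmentation: shared subsequences pay their lexicon cost once and compress the data thereafter.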

Original language: English
Pages (from-to): 549-559
Number of pages: 11
Journal: Transactions of the Japanese Society for Artificial Intelligence
Volume: 25
Issue number: 4
Publication status: Published - 2010
Externally published: Yes

Keywords

  • Lexical learning
  • Minimum description length
  • Speech processing
  • Symbol grounding

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
