Reducing odd generation from neural headline generation

Shun Kiyono, Sho Takase, Jun Suzuki, Naoaki Okazaki, Kentaro Inui, Masaaki Nagata

Research output: Contribution to conference › Paper › peer-review

Abstract

The Encoder-Decoder model is widely used in natural language generation tasks. However, the model sometimes suffers from repeated redundant generation, misses important phrases, and includes irrelevant entities. Toward solving these problems, we propose a novel source-side token prediction module. Our method jointly estimates the probability distributions over the source and target vocabularies to capture the correspondence between source and target tokens. Experiments show that the proposed model outperforms the current state-of-the-art method in the headline generation task. We also show that our method can learn a reasonable token-wise correspondence without knowing any true alignment.
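As a rough illustration of the joint estimation described in the abstract, the sketch below pairs the standard target-vocabulary projection with an auxiliary source-vocabulary prediction head on the same decoder state. This is a hypothetical PyTorch reconstruction, not the authors' implementation; the class name JointTokenPredictor, the layer sizes, and the per-step interface are all assumptions.

    import torch
    import torch.nn as nn

    class JointTokenPredictor(nn.Module):
        """Predicts token distributions over both vocabularies from one decoder state."""

        def __init__(self, hidden_size: int, tgt_vocab_size: int, src_vocab_size: int):
            super().__init__()
            # Standard target-side output projection of an encoder-decoder model.
            self.tgt_proj = nn.Linear(hidden_size, tgt_vocab_size)
            # Auxiliary source-side prediction head: the extra module the
            # abstract proposes for capturing source-target correspondence.
            self.src_proj = nn.Linear(hidden_size, src_vocab_size)

        def forward(self, decoder_state: torch.Tensor):
            # decoder_state: (batch, hidden_size) at a single decoding step.
            tgt_logits = self.tgt_proj(decoder_state)  # scores over target vocabulary
            src_logits = self.src_proj(decoder_state)  # scores over source vocabulary
            return tgt_logits, src_logits

    # Example usage with made-up sizes.
    predictor = JointTokenPredictor(hidden_size=512, tgt_vocab_size=30000, src_vocab_size=30000)
    state = torch.randn(2, 512)
    tgt_logits, src_logits = predictor(state)

Training would combine losses on both heads; per the abstract, the correspondence is learned without gold alignments, so the exact source-side supervision is left to the paper.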

Original language: English
Pages: 289-303
Number of pages: 15
Publication status: Published - 2018
Event: 32nd Pacific Asia Conference on Language, Information and Computation, PACLIC 2018 - Hong Kong, Hong Kong
Duration: 2018 Dec 1 - 2018 Dec 3

Conference

Conference: 32nd Pacific Asia Conference on Language, Information and Computation, PACLIC 2018
Country/Territory: Hong Kong
City: Hong Kong
Period: 18/12/1 - 18/12/3

ASJC Scopus subject areas

  • Language and Linguistics
  • Computer Science (miscellaneous)
