Corruption Is Not All Bad: Incorporating Discourse Structure into Pre-Training via Corruption for Essay Scoring

Farjana Sultana Mim, Naoya Inoue, Paul Reisert, Hiroki Ouchi, Kentaro Inui

Research output: Contribution to journal › Article › peer-review

Abstract

Existing approaches to automated essay scoring and document representation learning typically rely on discourse parsers to incorporate discourse structure into text representations. However, parser performance is not always adequate, especially on noisy texts such as student essays. In this paper, we propose an unsupervised pre-training approach that captures the discourse structure of essays in terms of coherence and cohesion and requires no discourse parser or annotation. We introduce several token-, sentence-, and paragraph-level corruption techniques for this pre-training approach, and we combine it with masked language modeling pre-training to leverage both contextualized and discourse information. Our unsupervised approach achieves a new state-of-the-art result on the task of essay Organization scoring.
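The article itself does not include code here; as a rough illustration of the general idea, the sketch below shows one plausible sentence-level corruption: shuffling the sentence order of an essay to produce an incoherent negative example that a model could be pre-trained to distinguish from the original. The function names, the naive sentence splitter, and the labeling scheme are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): sentence-level corruption for
# coherence-oriented pre-training. Each essay is split into sentences, the
# sentence order is shuffled, and the resulting (text, label) pairs could be
# used to pre-train a classifier that separates original from corrupted essays.
import random
import re

def split_sentences(text):
    # Naive split on sentence-final punctuation; a real pipeline would use a
    # proper sentence segmenter.
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]

def corrupt_sentence_order(essay, rng):
    """Return a corrupted copy of the essay with its sentences shuffled."""
    sentences = split_sentences(essay)
    shuffled = sentences[:]
    rng.shuffle(shuffled)
    return ' '.join(shuffled)

def make_pretraining_pairs(essays, seed=0):
    """Label originals as 1 (coherent) and corrupted versions as 0."""
    rng = random.Random(seed)
    pairs = []
    for essay in essays:
        pairs.append((essay, 1))
        pairs.append((corrupt_sentence_order(essay, rng), 0))
    return pairs

if __name__ == '__main__':
    demo = ["First, plan the essay. Next, write a draft. Finally, revise it."]
    for text, label in make_pretraining_pairs(demo):
        print(label, text)
```

In the paper's setting, such original/corrupted pairs are only one ingredient: the abstract describes token-, sentence-, and paragraph-level corruptions combined with masked language modeling, and the exact corruption types and training objective are given in the article itself.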

Original language: English
Article number: 9451631
Pages (from-to): 2202-2215
Number of pages: 14
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 29
DOIs
Publication status: Published - 2021

Keywords

  • Automated Essay Scoring
  • Coherence
  • Cohesion
  • Corruption
  • Discourse
  • Natural Language Processing
  • Pre-training
  • Unsupervised Learning

ASJC Scopus subject areas

  • Computer Science (miscellaneous)
  • Acoustics and Ultrasonics
  • Computational Mathematics
  • Electrical and Electronic Engineering
