Extracting representative subset from extensive text data for training pre-trained language models

Jun Suzuki, Heiga Zen, Hideto Kazawa

Research output: Contribution to journal › Article › peer-review

Abstract

This paper investigates whether a representative subset extracted from a large original dataset can achieve the same performance level as the entire dataset in the context of training neural language models. We employ a likelihood-based scoring method built on two distinct types of pre-trained language models to select the representative subset. We conduct experiments on 17 widely used natural language processing datasets with 24 evaluation metrics. The experimental results show that the representative subset obtained using the likelihood difference score can reach the 90% performance level even when the dataset is reduced to approximately two to three orders of magnitude smaller than the original. We also compare performance against models trained on randomly selected subsets of the same size to demonstrate the effectiveness of the representative subset.
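As a rough illustration of the scoring idea summarized above, the sketch below computes a likelihood-difference score for each candidate text using two pre-trained causal language models and keeps the highest-scoring examples. The specific models (GPT-2 variants via Hugging Face Transformers), the per-token averaging, and the top-k selection are assumptions made for illustration only, not the paper's exact procedure.

```python
# Hypothetical sketch of likelihood-difference scoring for subset selection.
# Model choices and the exact score definition are illustrative assumptions;
# the paper's actual method may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def sentence_log_likelihood(text, model, tokenizer):
    """Average token log-likelihood of `text` under a causal LM."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    # `loss` is the mean negative log-likelihood per token, so negate it.
    return -outputs.loss.item()

def likelihood_difference_scores(texts, model_a, tok_a, model_b, tok_b):
    """Score each text by the difference of its log-likelihoods under two LMs."""
    return [
        sentence_log_likelihood(t, model_a, tok_a)
        - sentence_log_likelihood(t, model_b, tok_b)
        for t in texts
    ]

# Example usage with placeholder models and a toy corpus.
tok_a = AutoTokenizer.from_pretrained("gpt2")
lm_a = AutoModelForCausalLM.from_pretrained("gpt2")
tok_b = AutoTokenizer.from_pretrained("gpt2-large")
lm_b = AutoModelForCausalLM.from_pretrained("gpt2-large")

corpus = ["An example sentence.", "Another candidate sentence for the subset."]
scores = likelihood_difference_scores(corpus, lm_a, tok_a, lm_b, tok_b)

# Keep the top-k texts by score as the "representative" subset.
k = 1
subset = [t for _, t in sorted(zip(scores, corpus), reverse=True)[:k]]
print(subset)
```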

Original language: English
Article number: 103249
Journal: Information Processing and Management
Volume: 60
Issue number: 3
DOIs
Publication status: Published - May 2023

Keywords

  • Data selection
  • Limited computational resource
  • Natural language processing
  • Neural language model
  • Pre-trained model
  • Representative subset

ASJC Scopus subject areas

  • Information Systems
  • Media Technology
  • Computer Science Applications
  • Management Science and Operations Research
  • Library and Information Sciences

