TY - GEN
T1 - Exploring Methodologies for Collecting High-Quality Implicit Reasoning in Arguments
AU - Singh, Keshav
AU - Mim, Farjana Sultana
AU - Inoue, Naoya
AU - Naito, Shoichi
AU - Inui, Kentaro
N1 - Funding Information:
This work was partially supported by JST CREST Grant Number JPMJCR20D2 and NEDO Grant Number J200001946. The authors would like to thank Paul Reisert, other members of the Tohoku NLP Lab, and the anonymous reviewers for their insightful feedback.
Publisher Copyright:
© 2021 Association for Computational Linguistics.
PY - 2021
Y1 - 2021
N2 - Annotation of implicit reasoning (i.e., warrants) in arguments is a critical resource for training models to gain a deeper understanding and correct interpretation of arguments. However, warrants are usually annotated in unstructured form with no restriction on their lexical structure, which sometimes makes it difficult to interpret how a warrant relates to the information given in the claim and premise. Moreover, assessing and selecting better warrants from the large variety of reasoning patterns found in unstructured warrants becomes a formidable task. Therefore, in order to annotate warrants in a more interpretable and constrained way, we propose two methodologies for annotating warrants in a semi-structured form. To the best of our knowledge, we are the first to show how such semi-structured warrants can be annotated on a large scale via crowdsourcing. We demonstrate through extensive quality evaluation that our methodologies enable collecting better-quality warrants than unstructured annotation. To further facilitate research on the task of explicating warrants in arguments, we release our materials publicly (i.e., crowdsourcing guidelines and collected warrants).
UR - http://www.scopus.com/inward/record.url?scp=85127175213&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85127175213&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85127175213
T3 - 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings
SP - 57
EP - 66
BT - 8th Workshop on Argument Mining, ArgMining 2021 - Proceedings
PB - Association for Computational Linguistics (ACL)
T2 - 8th Workshop on Argument Mining, ArgMining 2021
Y2 - 10 November 2021 through 11 November 2021
ER -