TY - GEN
T1 - Automatic Analysis of Substantiation in Scientific Peer Reviews
AU - Guo, Yanzhu
AU - Shang, Guokan
AU - Rennard, Virgile
AU - Vazirgiannis, Michalis
AU - Clavel, Chloé
N1 - Publisher Copyright:
© 2023 Association for Computational Linguistics.
PY - 2023/1/1
Y1 - 2023/1/1
N2 - With the increasing amount of problematic peer reviews in top AI conferences, the community is urgently in need of automatic quality control measures. In this paper, we restrict our attention to substantiation – one popular quality aspect indicating whether the claims in a review are sufficiently supported by evidence – and provide a solution automatizing this evaluation process. To achieve this goal, we first formulate the problem as claim-evidence pair extraction in scientific peer reviews, and collect SubstanReview, the first annotated dataset for this task. SubstanReview consists of 550 reviews from NLP conferences annotated by domain experts. On the basis of this dataset, we train an argument mining system to automatically analyze the level of substantiation in peer reviews. We also perform data analysis on the SubstanReview dataset to obtain meaningful insights on peer reviewing quality in NLP conferences over recent years. The dataset is available at https://github.com/YanzhuGuo/SubstanReview.
AB - With the increasing amount of problematic peer reviews in top AI conferences, the community is urgently in need of automatic quality control measures. In this paper, we restrict our attention to substantiation – one popular quality aspect indicating whether the claims in a review are sufficiently supported by evidence – and provide a solution automatizing this evaluation process. To achieve this goal, we first formulate the problem as claim-evidence pair extraction in scientific peer reviews, and collect SubstanReview, the first annotated dataset for this task. SubstanReview consists of 550 reviews from NLP conferences annotated by domain experts. On the basis of this dataset, we train an argument mining system to automatically analyze the level of substantiation in peer reviews. We also perform data analysis on the SubstanReview dataset to obtain meaningful insights on peer reviewing quality in NLP conferences over recent years. The dataset is available at https://github.com/YanzhuGuo/SubstanReview.
UR - https://www.scopus.com/pages/publications/85183292002
U2 - 10.18653/v1/2023.findings-emnlp.684
DO - 10.18653/v1/2023.findings-emnlp.684
M3 - Conference contribution
AN - SCOPUS:85183292002
T3 - Findings of the Association for Computational Linguistics: EMNLP 2023
SP - 10198
EP - 10216
BT - Findings of the Association for Computational Linguistics: EMNLP 2023
PB - Association for Computational Linguistics (ACL)
T2 - 2023 Findings of the Association for Computational Linguistics: EMNLP 2023
Y2 - 6 December 2023 through 10 December 2023
ER -