Area: Applications

As the field of NLP advances, more NLP approaches find their way into practical applications. Yet while NLP has been successful in helping humans find and extract information, only a few applications help us consume, evaluate and aggregate information from interconnected, changing texts. The focus of this area is to develop novel applications that assist humans in solving complex real-life text-based tasks, from editorial support to machine-assisted reading via NLP-enhanced annotation.

Peer review is a cornerstone of academic quality control, yet the pressure to publish causes significant reviewing overload in many key scientific fields, jeopardizing scientific progress and undermining public trust in science. While NLP applications for the analysis of scientific publications are abundant, the field of peer review analysis is only beginning to gain momentum. Peer review is an excellent target for cross-document discourse analysis, and this area puts special focus on developing NLP applications for peer reviewing assistance.


Nov 2023

Exploring Jiu-Jitsu Argumentation for Writing Peer Review Rebuttals
Sukannya Purkayastha, Anne Lauscher, Iryna Gurevych (2023)
🔥 To appear at EMNLP-2023 [paper]
[bibTex] [plain]

Jul 2023

NLPeer: A Unified Resource for the Computational Study of Peer Review
Nils Dycke, Ilia Kuznetsov, Iryna Gurevych (2023)
ACL-2023 [paper] [repo]
[bibTex] [plain]

Jul 2023

CARE: Collaborative AI-Assisted Reading Environment
Dennis Zyska, Nils Dycke, Jan Buchmann, Ilia Kuznetsov, Iryna Gurevych (2023)
ACL-2023 [paper] [repo]
[bibTex] [plain]

May 2022

Assisting Decision Making in Scholarly Peer Review: A Preference Learning Perspective
Nils Dycke, Edwin Simpson, Ilia Kuznetsov, Iryna Gurevych (2022)
🔥 arXiv [paper]
[bibTex] [plain]

Nov 2019

Does My Rebuttal Matter? Insights from a Major NLP Conference
Yang Gao, Steffen Eger, Ilia Kuznetsov, Iryna Gurevych, Yusuke Miyao (2019)
NAACL [paper] [repo]
[bibTex] [plain]

Datasets and Code

ACL-2018 Review Corpus
A corpus of anonymised structured peer reviews collected during the ACL-2018 reviewing campaign. ACL-2018 employed a rich reviewing schema, with each review containing a wide range of textual, binary, ternary and numerical fields, including Strengths, Weaknesses, Summary, aspect scores, overall score and confidence scores. While openly publishing the textual data is not possible due to ethical concerns, we make the numerical data publicly available to support the meta-scientific study of peer reviewing in the NLP community.
CARE Source
The source code for CARE: our new open-source Collaborative AI-Assisted Reading Environment. Explore the extensive documentation and try the public demo!
NLPeer
An openly-licensed, unified, multi-domain resource for the computational study of peer review. It provides papers, reviews and paper revisions in a unified format across a range of research communities, including new data from the ACL and COLING review collection campaigns.