Area: Applications

As the field of NLP advances, more NLP approaches find their way into practical applications. Yet while NLP has been successful in helping humans find and extract information, only a few applications help us consume, evaluate and aggregate information from interconnected, changing texts. The focus of this area is to develop novel applications that assist humans in solving complex real-life text-based tasks, from editorial support to machine-assisted reading via NLP-enhanced annotation.

Peer review is a cornerstone of academic quality control, yet the pressure to publish causes significant reviewing overload in many key scientific fields, jeopardizing scientific progress and undermining public trust in science. While NLP applications for the analysis of scientific publications are abundant, the field of peer review analysis is only just picking up pace. Peer review is an excellent target for cross-document discourse analysis, and this area places special focus on developing NLP applications for peer reviewing assistance.

Publications

Jul 2024

Systematic Task Exploration with LLMs: A Study in Citation Text Generation
Furkan Şahinuç, Ilia Kuznetsov, Yufang Hou, Iryna Gurevych (2024)
ACL-2024 [paper] [repo]
[BibTeX] [plain]

Jun 2024

⤴️ Missci: Reconstructing Fallacies in Misrepresented Science
Max Glockner, Yufang Hou, Preslav Nakov, Iryna Gurevych (2024)
ACL-2024 [paper] [repo]
[BibTeX] [plain]

May 2024

What Can Natural Language Processing Do for Peer Review?
Ilia Kuznetsov, Osama Mohammed Afzal, Koen Dercksen, Nils Dycke, Alexander Goldberg, Tom Hope, Dirk Hovy, Jonathan K. Kummerfeld, Anne Lauscher, Kevin Leyton-Brown, Sheng Lu, Mausam, Margot Mieskes, Aurélie Névéol, Danish Pruthi, Lizhen Qu... [+8] (2024)
🔥 arXiv [paper] [repo]
[BibTeX] [plain]

Dec 2023

⤴️ Exploring Jiu-Jitsu Argumentation for Writing Peer Review Rebuttals
Sukannya Purkayastha, Anne Lauscher, Iryna Gurevych (2023)
EMNLP-2023 [paper] [repo]
[BibTeX] [plain]

Dec 2023

Overview of PragTag-2023: Low-Resource Multi-Domain Pragmatic Tagging of Peer Reviews
Nils Dycke, Ilia Kuznetsov, Iryna Gurevych (2023)
Proceedings of the 10th Workshop on Argument Mining [paper] [repo]
[BibTeX] [plain]

Jul 2023

CARE: Collaborative AI-Assisted Reading Environment
Dennis Zyska, Nils Dycke, Jan Buchmann, Ilia Kuznetsov, Iryna Gurevych (2023)
ACL-2023 [paper] [repo]
[BibTeX] [plain]

Jul 2023

NLPeer: A Unified Resource for the Computational Study of Peer Review
Nils Dycke, Ilia Kuznetsov, Iryna Gurevych (2023)
ACL-2023 [paper] [repo]
[BibTeX] [plain]

May 2022

Assisting Decision Making in Scholarly Peer Review: A Preference Learning Perspective
Nils Dycke, Edwin Simpson, Ilia Kuznetsov, Iryna Gurevych (2022)
arXiv [paper]
[BibTeX] [plain]

Nov 2019

Does My Rebuttal Matter? Insights from a Major NLP Conference
Yang Gao, Steffen Eger, Ilia Kuznetsov, Iryna Gurevych, Yusuke Miyao (2019)
NAACL-2019 [paper] [repo]
[BibTeX] [plain]

Datasets and Code

ACL-2018 Review Corpus
A corpus of anonymised structured peer reviews collected during the ACL-2018 reviewing campaign. ACL-2018 employed a rich reviewing schema, with each review containing a wide range of textual, binary, ternary and numerical fields, including Strengths, Weaknesses, Summary, aspect scores, an overall score and a confidence score. While openly publishing the textual data is not possible due to ethical concerns, we make the numerical data publicly available to support meta-scientific study of peer reviewing in the NLP community.
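To illustrate the schema described above, a single review record could be represented as follows. This is a minimal sketch for illustration only: the field and aspect names here are hypothetical placeholders mirroring the description, not the exact format of the released data.

```python
# Hypothetical representation of one anonymised ACL-2018 review record.
# Field names follow the schema described above; the actual release
# contains only the numerical fields, since the free text is withheld.
review = {
    "summary": None,       # textual field, withheld for ethical reasons
    "strengths": None,     # textual field, withheld
    "weaknesses": None,    # textual field, withheld
    "aspect_scores": {     # numerical fields, publicly released
        "clarity": 4,      # example aspect (hypothetical)
        "originality": 3,  # example aspect (hypothetical)
    },
    "overall_score": 3.5,
    "confidence": 4,
}

# Example of the kind of meta-scientific aggregation the numerical
# data supports: the mean overall score across a set of reviews.
reviews = [review]
mean_overall = sum(r["overall_score"] for r in reviews) / len(reviews)
```

Analyses of this kind (score distributions, aspect correlations, reviewer confidence) are possible without access to the withheld review text.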
CARE Source
The source code for CARE: our new open-source Collaborative AI-Assisted Reading Environment. Explore the extensive documentation and try the public demo!
NLPeer
An openly licensed, unified, multi-domain resource for the computational study of peer review. Papers, reviews and paper revisions in a common format across a range of research communities, including new data from the ACL and COLING review collection campaigns.