
Data and Benchmarking
Research-ready datasets and benchmarks for interconnected living texts.
News
Three papers at NAACL. Intertextuality has many applications, and we are excited to share three InterText-related papers by our colleagues at UKP Lab, to appear at NAACL-2025! Jonathan Tonglet et al. address the problem of debunking misinformation in images. Max Glockner et al. investigate misrepresentation of scientific claims. And Tim Baumgärtner et al. explore question answering for academic peer reviews, winning an Outstanding Paper Award 🏆! Have a look at the preprints, and if you are at NAACL, visit their talks to learn more.
Keynote at SIGIR-25. We are happy to announce that Iryna Gurevych will give a keynote on the use of AI for science and expert-AI collaboration at SIGIR-2025. If you are at the conference, come to the talk to learn more about what InterText has been up to in the past months and about other related initiatives at UKP Lab.
🚀 NLPeer v.2 has arrived. After many months of hard work, we are happy to announce the release of the NLPeer v.2 corpus – a new iteration of our data collection initiative, with papers and reviews from ACL Rolling Review, eLife, PLOS and other venues. With over 1.8k papers, 1k reviews, 1k rebuttals and 480 meta-reviews, this is one of the largest, most complete and most diverse peer review datasets to date. Learn more about the project here, or simply download the dataset and start experimenting!
How do experts reason during peer review? This question is crucial for successful expert-AI collaboration. In a new paper, we rethink peer review as a diagnostic reasoning process and propose Natural Language Diagnostic Abductive Reasoning, a new family of text-based reasoning tasks in which experts analyze a text step by step to arrive at a verdict. Our unique dataset of over 4,000 reasoning steps opens new frontiers in the study of expert-AI collaboration. Have a look at the preprint to learn more!
Are LLMs good classifiers? To find out, we propose a framework to study LLM fine-tuning for classification with generation- and encoding-based approaches. We apply it to the edit intent classification task and create Re3-Sci2.0: a new large-scale dataset of scientific document revisions with over 94k labeled edits. Have a look at the preprint while we prepare the camera-ready for EMNLP!
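For readers unfamiliar with the two framings, here is a minimal sketch of what encoding- and generation-based classification look like for edit intents. The model names, label set and prompt template are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch of the two framings; the model names, label set and prompt
# template below are illustrative assumptions, not the paper's actual setup.
from transformers import (AutoModelForCausalLM,
                          AutoModelForSequenceClassification, AutoTokenizer)

LABELS = ["grammar", "clarity", "fact-update", "simplification"]  # hypothetical

# Encoding-based: encode the (old, new) sentence pair and let a
# classification head predict one of the edit-intent labels.
enc_tok = AutoTokenizer.from_pretrained("roberta-base")
enc_model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS))
enc_inputs = enc_tok("The results is good.", "The results are good.",
                     return_tensors="pt")
print(enc_model(**enc_inputs).logits.shape)  # (1, num_labels)

# Generation-based: frame the same decision as text generation, producing
# the label as the next tokens after an instruction-style prompt.
gen_tok = AutoTokenizer.from_pretrained("gpt2")
gen_model = AutoModelForCausalLM.from_pretrained("gpt2")
prompt = ("Old: The results is good.\nNew: The results are good.\n"
          f"Edit intent ({', '.join(LABELS)}):")
gen_ids = gen_model.generate(**gen_tok(prompt, return_tensors="pt"),
                             max_new_tokens=5)
print(gen_tok.decode(gen_ids[0], skip_special_tokens=True))
```

The encoding-based variant predicts a class index through a task-specific head, while the generation-based variant keeps the language-modelling interface, which is what makes instruction-tuned LLMs directly applicable to classification.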
InterText at ACL-2024. Two InterText papers to appear at ACL-2024 in Bangkok! Qian Ruan will present our new dataset and approach for holistic modelling of document revision [1], and Furkan Şahinuç will talk about systematic exploration of creative multi-document NLG tasks in the age of LLMs [2]. While the authors are busy preparing their posters, take a look at the preprints and meet us at the conference!
Introducing M2QA. Language and domain are two major sources of data variation in NLP, motivating the need for joint language-domain transfer. Yet reliable evaluation remains a challenge. To address this gap, together with colleagues, we created M2QA, a new multi-domain, multilingual QA benchmark that allows testing for domain and/or language transfer across four distinct languages and domains. Find out the details in our preprint, or get the data and start experimenting!
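As a quick way to start experimenting, here is a minimal evaluation sketch in the spirit of the benchmark: score a multilingual extractive QA model with SQuAD-style metrics on a single example. The example text, field names and choice of model are assumptions for illustration, not M2QA's official data or interface.

```python
# Evaluation sketch with a single made-up German product-review example;
# the data format shown here is an assumption, not the official M2QA schema.
import evaluate
from transformers import pipeline

qa = pipeline("question-answering",
              model="deepset/xlm-roberta-base-squad2")  # any multilingual QA model
squad = evaluate.load("squad")

example = {
    "id": "de-reviews-0001",
    "context": "Die Kamera ist leicht, aber der Akku hält nur vier Stunden.",
    "question": "Wie lange hält der Akku?",
    "answers": {"text": ["vier Stunden"], "answer_start": [46]},
}

pred = qa(question=example["question"], context=example["context"])
result = squad.compute(
    predictions=[{"id": example["id"], "prediction_text": pred["answer"]}],
    references=[{"id": example["id"], "answers": example["answers"]}],
)
print(result)  # {'exact_match': ..., 'f1': ...}
```

Swapping in examples from different language-domain combinations then quantifies how well a model trained on one cell of the language × domain grid transfers to another.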
New white paper on NLP for peer review. Peer review is at the core of modern science. Yet it is hard, time-consuming and often unfair. What makes peer review challenging, how can NLP help, and where should it stand aside? A new, extensive white paper written in collaboration with over 20 high-profile NLP and ML researchers lays the foundation for machine-assisted scientific quality control in the age of AI. The companion repository aggregates datasets for peer review assistance to help new researchers get started. Have a look and contribute!
InterText at EACL-2024. Long documents are often structured, making it much easier for humans to navigate large texts. Is document structure encoded in long-document transformers, and how can their structure-awareness be improved? We investigate this with a novel probing suite and structure infusion kit in our new EACL paper.
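To make the probing idea concrete, here is a generic sketch, not the paper's probing suite: freeze a long-document transformer, extract its hidden states, and fit a linear probe that predicts a structural property of the input. The model choice, toy sentences and "heading vs. body text" label are assumptions for illustration.

```python
# Generic probing sketch (illustrative only): if a linear probe on frozen
# hidden states can predict a structural property, that property is at least
# linearly decodable from the representations.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = AutoModel.from_pretrained("allenai/longformer-base-4096").eval()

texts = ["1 Introduction", "We study long documents.",
         "2 Related Work", "Prior work has focused on short inputs."]
labels = [1, 0, 1, 0]  # toy labels: 1 = section heading, 0 = body text

with torch.no_grad():
    enc = tok(texts, padding=True, return_tensors="pt")
    feats = model(**enc).last_hidden_state[:, 0, :]  # first-token vectors

probe = LogisticRegression(max_iter=1000).fit(feats.numpy(), labels)
print(probe.score(feats.numpy(), labels))  # probe accuracy on the toy data
```

A real probing study would of course use held-out data and proper controls; the sketch only shows the mechanics.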
Related work from our colleagues. Peer review is one of the core objects of study in InterText. A closely related new work by our colleagues at UKP Lab and the University of Hamburg explores argumentation in peer reviews and rebuttals. Take a look at their preprint and visit their talk at the upcoming EMNLP!
Team

Iryna Gurevych
Principal Investigator

Ilia Kuznetsov
Postdoc

Jan Buchmann
PhD Student

Nils Dycke
PhD Student

Qian Ruan
PhD Student

Dennis Zyska
PhD Student

Sheng Lu
PhD Student

Yiwei Wang
PhD Student

Serwar Basch
PhD Student
Funding


