TREC 2019 Fair Ranking Track
The TREC Fair Ranking track evaluates systems according to how fairly they rank documents. The 2019 task focuses on re-ranking academic abstracts given a query. The objective is to fairly represent relevant authors under several undisclosed group definitions. These groups can be defined in a variety of ways, and the track emphasizes the development of systems that perform robustly across a variety of group definitions.
We have released the track guidelines, including a description of the dataset, the experimentation protocol, and the evaluation metrics, here. We are also releasing simulation code that generates query sequences similar to those you will receive in August.
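The official behavior is defined by the released query-sequence-generator.py script; as a rough illustration only, the following sketch assumes sequences are formed by sampling queries independently and with replacement from a fixed pool (an assumption for illustration, not the track's exact protocol):

```python
import random


def generate_query_sequence(queries, length, seed=None):
    """Illustrative sketch: draw a sequence of queries, sampled
    independently and with replacement from a fixed query pool.
    NOTE: this is a simplified stand-in, not the official
    query-sequence-generator.py logic."""
    rng = random.Random(seed)  # seeded for reproducible sequences
    return [rng.choice(queries) for _ in range(length)]


# Hypothetical query pool for demonstration purposes.
queries = ["fair ranking", "learning to rank", "citation networks"]
sequence = generate_query_sequence(queries, length=5, seed=0)
print(sequence)
```

Seeding the generator makes a simulated sequence reproducible, which is useful when comparing system runs on the same sequence.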
The corpus for this project is the
Semantic Scholar (S2) Open Corpus from the Allen Institute for Artificial Intelligence, consisting of 47 1GB data files. We have an associated list of ~600 queries and relevance estimates, which we plan to release in the first week of June. For the authors appearing in the candidate set, we have example group assignments. Data formats are described in the guidelines document.
June 3, 2019: guidelines, data released; system development begins
August 13, 2019: evaluation query sequences released
August 16, 2019: submission site open
September 2, 2019: submissions due
late September, 2019: evaluated submissions returned
TREC Fair Ranking evaluation package: scripts and data (updated October 28, 2019). Note: by downloading the data sample contained in this package, you agree to the Semantic Scholar Dataset License Agreement.
TREC Fair Ranking Track Participant Instructions: guidelines, experimentation protocol, and data format description (updated August 13, 2019).
query-sequence-generator.py: Python script to generate query sequences (updated June 3, 2019).
fair-TREC-training-sample.json: sample relevance estimates.
note: by downloading this sample, you agree to the Semantic Scholar Dataset License Agreement. (updated July 2, 2019).
fair-TREC-sample-author-groups.csv: sample author group membership (updated June 3, 2019).
fair-TREC-evaluation-sample.json: evaluation queries and documents to be re-ranked. note: by downloading this sample, you agree to the AI2 terms of service. (updated August 13, 2019).
fair-TREC-evaluation-sequences.csv: evaluation query sequences. (updated August 13, 2019).
validate-run.py: submission validation script and example submission. (updated August 16, 2019).
Microsoft Research Montréal
Microsoft Research Montréal
People and Information Research Team (PIReT)
Boise State University
Allen Institute for Artificial Intelligence
Copyright © FAIR TREC organizers. The contents of this page have not been reviewed or approved by
Boise State University, Microsoft, or NIST, and do not represent the positions or opinions of those organizations.