TREC 2019 Fair Ranking Track

The TREC Fair Ranking track evaluates systems according to how fairly they rank documents. The 2019 task focuses on re-ranking academic abstracts given a query. The objective is to fairly represent relevant authors drawn from several undisclosed group definitions. Because these groups can be defined in a variety of ways, the track emphasizes the development of systems that perform robustly across a range of group definitions.
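
To make the notion of fair representation concrete, one common ingredient in fair ranking metrics is the exposure a group receives under a position-discounted browsing model. The Python sketch below is only an illustration of that idea, not the track's official evaluation metric (see the guidelines for the real definition); the group labels and the geometric discount are placeholder assumptions.

from collections import defaultdict

def group_exposure(ranking, author_groups, gamma=0.5):
    # ranking: document ids, best first.
    # author_groups: doc id -> group labels of its authors
    # (the track's group definitions are undisclosed, so these
    # labels are placeholders).
    # gamma: continuation probability of a geometric browsing model.
    exposure = defaultdict(float)
    for position, doc_id in enumerate(ranking):
        weight = gamma ** position  # attention decays with rank
        for group in author_groups.get(doc_id, []):
            exposure[group] += weight
    return dict(exposure)

# Example with placeholder groups: exposure skews toward groups
# whose documents appear near the top of the ranking.
print(group_exposure(["d1", "d2"], {"d1": ["A"], "d2": ["B"]}))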

GUIDELINES

We have released the track guidelines, including a description of the dataset, the experimentation protocol, and the evaluation metrics. We are also releasing simulation code to generate query sequences similar to those you will receive in August.
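
The released simulator is the authoritative reference for the query sequences; purely as an illustration of the setup, a toy generator might sample queries with replacement so that the same query recurs across the sequence, as in this Python sketch (the query identifiers are placeholders):

import random

def simulate_query_sequence(query_ids, length=100, seed=0):
    # Sample with replacement so queries repeat across the sequence,
    # loosely mimicking a stream of repeated query impressions.
    rng = random.Random(seed)
    return [rng.choice(query_ids) for _ in range(length)]

# Example usage with placeholder query identifiers.
print(simulate_query_sequence(["q1", "q2", "q3"], length=10))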

DATA

The corpus for this project is the Semantic Scholar (S2) Open Corpus from the Allen Institute for Artificial Intelligence, consisting of 47 data files of roughly 1 GB each. We have an associated list of ~600 queries and relevance estimates that we plan to release in the first week of June. For the authors appearing in the candidate set, we provide example group assignments. Data formats are described in the guidelines document.
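
As a minimal sketch for working with the corpus, assuming the files follow the public S2 Open Corpus convention of gzipped JSON lines (one paper record per line; the file name and field names below are illustrative, and the guidelines document is authoritative for the data formats), the records can be streamed like this:

import gzip
import json

def iter_papers(path):
    # Stream one paper record per line from a gzipped JSON-lines file.
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# Example: pull titles and author lists from one shard.
# "s2-corpus-00.gz" and the field names are assumptions; check the
# guidelines document for the authoritative schema.
for paper in iter_papers("s2-corpus-00.gz"):
    title = paper.get("title")
    authors = paper.get("authors", [])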

TIMELINE

DOWNLOADS

RESOURCES

CITATION

Citations to this dataset should use the following BibTeX entry:
@inproceedings{trec-fair-ranking-2019,
	Author = {Asia J. Biega and Fernando Diaz and Michael D. Ekstrand and Sebastian Kohlmeier},
	Booktitle = {The Twenty-Eighth Text REtrieval Conference (TREC 2019) Proceedings},
	Title = {Overview of the TREC 2019 Fair Ranking Track},
	Year = {2019}}

ORGANIZERS

Asia Biega
Microsoft Research
Montréal
Fernando Diaz
Microsoft Research
Montréal
Michael Ekstrand
People and Information Research Team (PIReT)
Boise State University
Sebastian Kohlmeier
Semantic Scholar
Allen Institute for Artificial Intelligence