The TREC Fair Ranking track evaluates systems according to how fairly they rank documents. The 2019 task focuses on re-ranking academic abstracts in response to a query. The objective is to fairly represent relevant authors from several undisclosed group definitions. Because groups can be defined in many different ways, the track emphasizes the development of systems that perform robustly across a variety of group definitions.
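To make the objective concrete, the sketch below shows one way exposure could be aggregated per author group in a ranked list. It assumes a logarithmic position discount and illustrative document-to-group labels; this is not the track's official evaluation metric, which is defined in the guidelines.

# A minimal sketch of per-group exposure in a ranking, assuming a
# logarithmic position discount. Group labels and the discount are
# illustrative, not the track's official definitions.
import math
from collections import defaultdict

def group_exposure(ranking, author_groups):
    """Sum position-discounted exposure per group for one ranked list.

    ranking: list of document ids, best first.
    author_groups: dict mapping document id -> list of group labels.
    """
    exposure = defaultdict(float)
    for rank, doc in enumerate(ranking, start=1):
        weight = 1.0 / math.log2(rank + 1)  # hypothetical position discount
        for group in author_groups.get(doc, []):
            exposure[group] += weight
    return dict(exposure)

# Toy example: two groups split across a three-document ranking.
ranking = ["d1", "d2", "d3"]
groups = {"d1": ["g1"], "d2": ["g2"], "d3": ["g1", "g2"]}
print(group_exposure(ranking, groups))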
GUIDELINES
We have released the track guidelines, including a description of the dataset, the experimentation protocol, and the evaluation metrics. We are also releasing simulation code to generate query sequences similar to those you will receive in August.
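As a rough picture of what such a simulator does, the sketch below draws a query sequence from a fixed distribution. The query ids and weights here are made up; the released simulation code defines the actual sampling process.

# A hedged sketch of a query-sequence simulator: repeated draws from a
# fixed query distribution. Query ids and weights are illustrative.
import random

def simulate_query_sequence(query_weights, length, seed=0):
    """Sample a sequence of query ids in proportion to their weights."""
    rng = random.Random(seed)
    queries = list(query_weights)
    weights = [query_weights[q] for q in queries]
    return rng.choices(queries, weights=weights, k=length)

print(simulate_query_sequence({"q1": 0.5, "q2": 0.3, "q3": 0.2}, length=10))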
Citations to this dataset should use the following BibTeX entry:
@inproceedings{trec-fair-ranking-2019,
  author    = {Asia J. Biega and Fernando Diaz and Michael D. Ekstrand and Sebastian Kohlmeier},
  booktitle = {The Twenty-Eighth Text REtrieval Conference (TREC 2019) Proceedings},
  title     = {Overview of the TREC 2019 Fair Ranking Track},
  year      = {2019}
}