The TREC Fair Ranking track evaluates systems according to how fairly they rank documents. The 2019 task focuses on re-ranking academic abstracts given a query. The objective is to fairly represent relevant authors under several undisclosed group definitions. Because groups can be defined in many ways, the track emphasizes developing systems that perform robustly across group definitions.
We have released the track guidelines, which describe the dataset, the experimentation protocol, and the evaluation metrics, here. We are also releasing simulation code to generate query sequences similar to those you will receive in August.
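To give a concrete picture of what a query sequence might look like, here is a minimal sketch of a generator that samples queries with replacement from a fixed pool, so the same query can recur across the sequence. The function name, the example pool, and the uniform sampling scheme are illustrative assumptions only; the released simulation code may weight or order queries differently.

```python
import random

def simulate_query_sequence(query_pool, length, seed=0):
    """Sample a sequence of queries, with replacement, from a fixed pool.

    Hypothetical stand-in for the released simulation code: queries are
    drawn uniformly here, but the real generator may use a non-uniform
    distribution over the query log.
    """
    rng = random.Random(seed)
    return [rng.choice(query_pool) for _ in range(length)]

if __name__ == "__main__":
    # Illustrative query pool; in the track data, each query maps to a
    # set of candidate academic abstracts that the system must re-rank.
    pool = ["deep learning", "information retrieval", "fair ranking"]
    for position, query in enumerate(simulate_query_sequence(pool, 5, seed=42)):
        print(position, query)
```

Because queries repeat across a sequence, a system can vary its rankings for the same query over time, which is one way to spread exposure fairly across author groups.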