An end-to-end neural ad-hoc ranking pipeline.
This project is maintained by Georgetown-IR-Lab.
You can configure and update the neural ranking architecture using rankers.
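As a rough sketch of how a ranker might be selected on the command line (the entry-point script and config names here are assumptions for illustration, not verified commands; consult the project README for the exact invocation):

```shell
# Hypothetical invocation: run the pipeline with the DRMM ranker.
# The script path and the config/override names are assumptions based on
# the identifiers listed in this document; verify against the repository.
scripts/pipeline.sh config/robust ranker=drmm
```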
config/cedr: Implementation of CEDR for the DRMM model described in: Sean MacAvaney, Andrew Yates, Arman Cohan, and Nazli Goharian. 2019. CEDR: Contextualized Embeddings for Document Ranking. In SIGIR. link Should be used with a model first trained using Vanilla BERT.
config/conv_knrm: Implementation of the ConvKNRM model from: Zhuyun Dai, Chenyan Xiong, Jamie Callan, and Zhiyuan Liu. 2018. Convolutional Neural Networks for Soft-Matching N-Grams in Ad-hoc Search. In WSDM. link
ranker=drmm: Implementation of the DRMM model from: Jiafeng Guo, Yixing Fan, Qingyao Ai, and William Bruce Croft. 2016. A Deep Relevance Matching Model for Ad-hoc Retrieval. In CIKM. link
ranker=duetl: Implementation of the local variant of the Duet model from: Bhaskar Mitra, Fernando Diaz, and Nick Craswell. 2016. Learning to Match using Local and Distributed Representations of Text for Web Search. In WWW. link
ranker=knrm: Implementation of the K-NRM model from: Chenyan Xiong, Zhuyun Dai, James P. Callan, Zhiyuan Liu, and Russell Power. 2017. End-to-End Neural Ad-hoc Ranking with Kernel Pooling. In SIGIR. link
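To illustrate the kernel-pooling idea behind K-NRM, here is a minimal sketch with toy similarity values; the function name and structure are hypothetical and this is not the library's implementation:

```python
import math

def kernel_pooling(sim_matrix, mus, sigma=0.1):
    """Sketch of RBF kernel pooling over a query-document similarity matrix.

    sim_matrix: rows = query terms, cols = document terms, entries are
    cosine similarities. Each kernel centered at mu soft-counts matches
    near that similarity level; log-sums over query terms yield one
    ranking feature per kernel.
    """
    features = []
    for mu in mus:
        total = 0.0
        for row in sim_matrix:
            # Soft match count of this query term against all doc terms.
            k = sum(math.exp(-((s - mu) ** 2) / (2 * sigma ** 2)) for s in row)
            # Log transform dampens the effect of very frequent matches.
            total += math.log(k + 1e-10)
        features.append(total)
    return features

# Toy example: 2 query terms x 3 document terms of cosine similarities.
sims = [[0.9, 0.1, 0.3],
        [0.2, 0.8, 0.4]]
feats = kernel_pooling(sims, mus=[1.0, 0.5, 0.0])  # one feature per kernel
```

In the full model these per-kernel features are fed to a learned scoring layer; this sketch only shows the pooling step.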
ranker=matchpyramid: Implementation of the MatchPyramid model for ranking from: Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, and Xueqi Cheng. 2016. A Study of MatchPyramid Models on Ad-hoc Retrieval. In NeuIR @ SIGIR. link
ranker=pacrr: Implementation of the PACRR model from: Kai Hui, Andrew Yates, Klaus Berberich, and Gerard de Melo. 2017. PACRR: A Position-Aware Neural IR Model for Relevance Matching. In EMNLP. link
Some features included from CO-PACRR (e.g., shuf): Kai Hui, Andrew Yates, Klaus Berberich, and Gerard de Melo. 2018. Co-PACRR: A Context-Aware Neural IR Model for Ad-hoc Retrieval. In WSDM. link
config/trivial: Trivial ranker, which simply returns the initial ranking score. Used for comparisons against neural ranking approaches.
Options allow the score to be inverted (neg), the individual query term scores to be summed by the ranker itself (qsum), and to use the manual relevance assessment instead of the run score, representing an optimal re-ranker (max).
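The three options can be illustrated with a small sketch (the function name, arguments, and structure are hypothetical, not the project's code):

```python
def trivial_score(term_scores, initial_score, qrel=None,
                  neg=False, qsum=False, use_max=False):
    """Sketch of the trivial ranker's options.

    - qsum: sum the individual query term scores instead of using
      the run score.
    - neg: invert the score.
    - use_max (the 'max' option): return the manual relevance
      assessment instead of the run score, i.e. an oracle re-ranker.
    """
    if use_max and qrel is not None:
        return float(qrel)  # oracle: rank directly by relevance judgment
    score = sum(term_scores) if qsum else initial_score
    return -score if neg else score
```

For example, `trivial_score([1.0, 2.0], 5.0)` returns the run score 5.0, while passing `qsum=True` returns 3.0 and `neg=True` flips the sign.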
config/vanilla_bert: Implementation of the Vanilla BERT model from: Sean MacAvaney, Andrew Yates, Arman Cohan, and Nazli Goharian. 2019. CEDR: Contextualized Embeddings for Document Ranking. In SIGIR. link Should be used with a transformer vocab, e.g., BertVocab.
Implementations of rankers from the MatchZoo-py library.
ranker=mz_knrm: MatchZoo implementation of the K-NRM model from: Chenyan Xiong, Zhuyun Dai, James P. Callan, Zhiyuan Liu, and Russell Power. 2017. End-to-End Neural Ad-hoc Ranking with Kernel Pooling. In SIGIR. link
ranker=mz_conv_knrm: MatchZoo implementation of the ConvKNRM model from: Zhuyun Dai, Chenyan Xiong, Jamie Callan, and Zhiyuan Liu. 2018. Convolutional Neural Networks for Soft-Matching N-Grams in Ad-hoc Search. In WSDM. link