Pyserini is a Python toolkit for reproducible information retrieval research with sparse and dense representations. Retrieval using sparse representations is provided via integration with our group's Anserini IR toolkit, which is built on Lucene. Retrieval using dense representations is provided via integration with Facebook's Faiss library.
Pyserini is primarily designed to provide effective, reproducible, and easy-to-use first-stage retrieval in a multi-stage ranking architecture. Our toolkit is self-contained as a standard Python package and comes with queries, relevance judgments, prebuilt indexes, and evaluation scripts for many commonly used IR test collections. With Pyserini, it's easy to reproduce runs on a number of standard IR test collections!
For additional details, our paper in SIGIR 2021 provides a nice overview.
✨ New! Guide to working with the MS MARCO 2.1 Document Corpus for the TREC 2024 RAG Track.
❗ Anserini was upgraded from JDK 11 to JDK 21 at commit `272565` (2024/04/03), which corresponds to the release of v0.35.0. Correspondingly, Pyserini was upgraded to JDK 21 at commit `b2f677` (2024/04/04).
Install via PyPI:
pip install pyserini
Pyserini is built on Python 3.10 (other versions might work, but YMMV) and Java 21 (due to its dependency on Anserini).
A `pip` installation will automatically pull in major dependencies such as PyTorch, 🤗 Transformers, and the ONNX Runtime.
The toolkit also comes with "extras":
pip install 'pyserini[extras]'
Notably, `faiss-cpu`, `lightgbm`, and `nmslib` are included in these "extras".
Installation of these packages can be temperamental, which is why they are not included in the core dependencies.
It might be a good idea to install these yourself separately.
The software ecosystem is rapidly evolving and a potential source of frustration is incompatibility among different versions of underlying dependencies. We provide additional detailed installation instructions here.
If you're planning on just using Pyserini, then the `pip` installation (without "extras") should be fine.
However, if you're planning on contributing to the codebase or want to work with the latest not-yet-released features, you'll need a development installation.
Instructions are provided here.
Pyserini supports different types of retrieval models. See this guide for details on how to search common corpora in IR and NLP research (e.g., MS MARCO, NaturalQuestions, BEIR, etc.) using indexes that we have already built for you. Here are direct links into the guide, followed by a minimal search example:
- Traditional lexical models (e.g., BM25) using Lucene.
- Learned sparse retrieval models (e.g., uniCOIL, SPLADE, etc.) using Lucene.
- Learned dense retrieval models (e.g., DPR, Contriever, BGE, etc.) using Lucene or Faiss.
- Hybrid retrieval models (e.g., dense-sparse fusion).
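The sketch below shows first-stage retrieval with a prebuilt BM25 index; the index name `msmarco-v1-passage` and the query are purely illustrative (the prebuilt-indexes guide lists what is available):

```python
from pyserini.search.lucene import LuceneSearcher

# Download (and cache) a prebuilt BM25 index, then issue a query.
searcher = LuceneSearcher.from_prebuilt_index('msmarco-v1-passage')
hits = searcher.search('what is a lobster roll?', k=10)

# Each hit exposes the collection docid and its retrieval score.
for i, hit in enumerate(hits):
    print(f'{i + 1:2} {hit.docid:20} {hit.score:.5f}')
```

Dense and hybrid retrieval follow the same general pattern with the Faiss-backed searchers and a query encoder; see the guide above for details.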
Once you get the top-k results, you'll actually want to fetch the document text... See this guide for how.
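For instance, continuing the sketch above (index name and query remain illustrative), each hit's docid can be used to pull the stored document back out of the index:

```python
from pyserini.search.lucene import LuceneSearcher

searcher = LuceneSearcher.from_prebuilt_index('msmarco-v1-passage')
hits = searcher.search('what is a lobster roll?', k=10)

# Fetch the stored document for the top hit: raw() returns the document as
# indexed (assuming the index stores raw documents), contents() the parsed text.
doc = searcher.doc(hits[0].docid)
print(doc.raw())
```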
To index your own corpus, the steps depend on what type of retrieval model you want to search with:
- Building a BM25 Index (Direct Java Implementation)
- Building a BM25 Index (Embeddable Python Implementation)
- Building a Sparse Vector Index
- Building a Dense Vector Index
The steps differ across the different classes of models; this guide (the same one as the links above) describes the details.
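To make the first option concrete, here is a hedged sketch of building a BM25 index over a directory of JSON documents (each with `id` and `contents` fields); the input and output paths are placeholders:

```bash
python -m pyserini.index.lucene \
  --collection JsonCollection \
  --input path/to/corpus \
  --index indexes/my-bm25-index \
  --generator DefaultLuceneDocumentGenerator \
  --threads 4 \
  --storePositions --storeDocvectors --storeRaw
```

Storing positions, document vectors, and raw documents is optional, but enables the index reader features and document fetching described elsewhere in this README.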
- How do I configure search? (Guide to Interactive Search)
- How do I manually download indexes? (Guide to Interactive Search)
- How do I perform dense and hybrid retrieval? (Guide to Interactive Search)
- How do I iterate over index terms and access term statistics? (Index Reader API; see the sketch after this list)
- How do I traverse postings? (Index Reader API)
- How do I access and manipulate term vectors? (Index Reader API)
- How do I compute the tf-idf or BM25 score of a document? (Index Reader API)
- How do I access basic index statistics? (Index Reader API)
- How do I access underlying Lucene analyzers? (Analyzer API)
- How do I build custom Lucene queries? (Query Builder API)
- How do I iterate over raw collections? (Collection API)
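To illustrate a few of these, here is a minimal sketch of low-level index access against a prebuilt index; the index name and docid are illustrative, and the exact class and module names may differ slightly across Pyserini versions:

```python
from pyserini.index.lucene import IndexReader

# Open a prebuilt index for low-level access.
index_reader = IndexReader.from_prebuilt_index('msmarco-v1-passage')

# Basic index statistics (number of documents, unique terms, etc.).
print(index_reader.stats())

# Document frequency and collection frequency of a term (analyzed by default).
df, cf = index_reader.get_term_counts('lobster')
print(f'df={df}, cf={cf}')

# Traverse the postings list for a term (docids here are internal Lucene docids).
for posting in index_reader.get_postings_list('lobster')[:5]:
    print(posting.docid, posting.tf)

# BM25 weight of a term with respect to a particular document (docid is illustrative).
print(index_reader.compute_bm25_term_weight('7157715', 'lobster'))
```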
With Pyserini, it's easy to reproduce runs on a number of standard IR test collections: we provide prebuilt indexes that directly support reproducibility "out of the box".
In our SIGIR 2022 paper, we introduced "two-click reproductions" that allow anyone to reproduce experimental runs with only two clicks (i.e., copy and paste). Documentation is organized into reproduction matrices for different corpora that summarize the different experimental conditions and query sets (a sample invocation follows the list):
- MS MARCO V1 Passage
- MS MARCO V1 Document
- MS MARCO V2 Passage
- MS MARCO V2 Document
- BEIR
- Mr.TyDi
- MIRACL
- Open-Domain Question Answering
- CIRAL
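To give a flavor of what a two-click reproduction looks like, here is a hedged sketch of a BM25 run and its evaluation on the MS MARCO V1 passage dev queries; the reproduction matrices above are authoritative for the exact commands and expected scores:

```bash
# Generate a BM25 run over a prebuilt index (k1 and b shown are the defaults
# used for this condition).
python -m pyserini.search.lucene \
  --index msmarco-v1-passage \
  --topics msmarco-passage-dev-subset \
  --output run.msmarco-v1-passage.bm25.txt \
  --bm25 --k1 0.9 --b 0.4

# Evaluate the run with trec_eval (MRR@10).
python -m pyserini.eval.trec_eval -c -M 10 -m recip_rank \
  msmarco-passage-dev-subset run.msmarco-v1-passage.bm25.txt
```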
For more details, see our paper on Building a Culture of Reproducibility in Academic Research.
Additional reproduction guides below provide detailed step-by-step instructions.
Sparse Retrieval
- Reproducing Robust04 baselines for ad hoc retrieval
- Reproducing the BM25 baseline for MS MARCO V1 Passage Ranking
- Reproducing the BM25 baseline for MS MARCO V1 Document Ranking
- Reproducing the multi-field BM25 baseline for MS MARCO V1 Document Ranking from Elasticsearch
- Reproducing BM25 baselines on the MS MARCO V2 Collections
- Reproducing LTR filtering experiments: MS MARCO V1 Passage, MS MARCO V1 Document
- Reproducing IRST experiments on the MS MARCO V1 Collections
- Reproducing DeepImpact: MS MARCO V1 Passage
- Reproducing uniCOIL with doc2query-T5: MS MARCO V1, MS MARCO V2
- Reproducing uniCOIL with TILDE: MS MARCO V1 Passage, MS MARCO V2 Passage
- Reproducing SPLADEv2: MS MARCO V1 Passage
- Reproducing Mr. TyDi experiments
- Reproducing BM25 baselines for HC4
- Reproducing BM25 baselines for HC4 on NeuCLIR22
- Reproducing SLIM experiments
- Baselines for KILT: a benchmark for Knowledge Intensive Language Tasks
- Baselines for TripClick: a large-scale dataset of click logs in the health domain
- Baselines (in Anserini) for the FEVER (Fact Extraction and VERification) dataset
Dense Retrieval
- Reproducing TCT-ColBERTv1 experiments: MS MARCO V1
- Reproducing TCT-ColBERTv2 experiments: MS MARCO V1, MS MARCO V2
- Reproducing DPR experiments
- Reproducing BPR experiments
- Reproducing ANCE experiments
- Reproducing DistilBERT KD experiments
- Reproducing DistilBERT Balanced Topic Aware Sampling experiments
- Reproducing SBERT dense retrieval experiments
- Reproducing ADORE dense retrieval experiments
- Reproducing Vector PRF experiments
- Reproducing ANCE-PRF experiments
- Reproducing Mr. TyDi experiments
- Reproducing DKRR experiments
Hybrid Sparse-Dense Retrieval
Available Corpora
| Corpora | Size | Checksum |
|:--------|:-----|:---------|
| MS MARCO V1 passage: uniCOIL (noexp) | 2.7 GB | `f17ddd8c7c00ff121c3c3b147d2e17d8` |
| MS MARCO V1 passage: uniCOIL (d2q-T5) | 3.4 GB | `78eef752c78c8691f7d61600ceed306f` |
| MS MARCO V1 doc: uniCOIL (noexp) | 11 GB | `11b226e1cacd9c8ae0a660fd14cdd710` |
| MS MARCO V1 doc: uniCOIL (d2q-T5) | 19 GB | `6a00e2c0c375cb1e52c83ae5ac377ebb` |
| MS MARCO V2 passage: uniCOIL (noexp) | 24 GB | `d9cc1ed3049746e68a2c91bf90e5212d` |
| MS MARCO V2 passage: uniCOIL (d2q-T5) | 41 GB | `1949a00bfd5e1f1a230a04bbc1f01539` |
| MS MARCO V2 doc: uniCOIL (noexp) | 55 GB | `97ba262c497164de1054f357caea0c63` |
| MS MARCO V2 doc: uniCOIL (d2q-T5) | 72 GB | `c5639748c2cbad0152e10b0ebde3b804` |
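After downloading one of these corpora, the checksum column can be verified with a short script; this is a generic sketch using only the Python standard library, with a placeholder file name:

```python
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 digest of a file without reading it all into memory."""
    digest = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the printed digest against the checksum column above.
print(md5_of('path/to/downloaded-corpus.tar'))
```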
- Guide to prebuilt indexes
- Guide to interactive searching
- Guide to text classification with the 20Newsgroups dataset
- Guide to working with the COVID-19 Open Research Dataset (CORD-19)
- Guide to working with entity linking
- Guide to working with spaCy
- Usage of the Analyzer API
- Usage of the Index Reader API
- Usage of the Query Builder API
- Usage of the Collection API
- Direct Interaction via Pyjnius
- v0.42.0 (w/ Anserini v0.38.0): November 8, 2024 [Release Notes]
- v0.41.0 (w/ Anserini v0.38.0): November 7, 2024 [Release Notes] [Known Issues]
- v0.40.0 (w/ Anserini v0.38.0): October 28, 2024 [Release Notes]
- v0.39.0 (w/ Anserini v0.38.0): September 27, 2024 [Release Notes]
- v0.38.0 (w/ Anserini v0.38.0): September 11, 2024 [Release Notes]
- v0.37.0 (w/ Anserini v0.37.0): August 26, 2024 [Release Notes]
- v0.36.0 (w/ Anserini v0.36.1): June 17, 2024 [Release Notes]
- v0.35.0 (w/ Anserini v0.35.0): April 4, 2024 [Release Notes]
older... (and historic notes)
- v0.25.0 (w/ Anserini v0.25.0): March 31, 2024 [Release Notes]
- v0.24.0 (w/ Anserini v0.24.0): December 28, 2023 [Release Notes]
- v0.23.0 (w/ Anserini v0.23.0): November 17, 2023 [Release Notes]
- v0.22.1 (w/ Anserini v0.22.1): October 19, 2023 [Release Notes]
- v0.22.0 (w/ Anserini v0.22.0): August 31, 2023 [Release Notes]
- v0.21.0 (w/ Anserini v0.21.0): April 6, 2023 [Release Notes]
- v0.20.0 (w/ Anserini v0.20.0): February 1, 2023 [Release Notes]
- v0.19.2 (w/ Anserini v0.16.2): December 16, 2022 [Release Notes]
- v0.19.1 (w/ Anserini v0.16.1): November 12, 2022 [Release Notes]
- v0.19.0 (w/ Anserini v0.16.1): November 2, 2022 [Release Notes] [Known Issues]
- v0.18.0 (w/ Anserini v0.15.0): September 26, 2022 [Release Notes] (First release based on Lucene 9)
- v0.17.1 (w/ Anserini v0.14.4): August 13, 2022 [Release Notes] (Final release based on Lucene 8)
- v0.17.0 (w/ Anserini v0.14.3): May 28, 2022 [Release Notes]
- v0.16.1 (w/ Anserini v0.14.3): May 12, 2022 [Release Notes]
- v0.16.0 (w/ Anserini v0.14.1): March 1, 2022 [Release Notes]
- v0.15.0 (w/ Anserini v0.14.0): January 21, 2022 [Release Notes]
- v0.14.0 (w/ Anserini v0.13.5): November 8, 2021 [Release Notes]
- v0.13.0 (w/ Anserini v0.13.1): July 3, 2021 [Release Notes]
- v0.12.0 (w/ Anserini v0.12.0): May 5, 2021 [Release Notes]
- v0.11.0.0: February 18, 2021 [Release Notes]
- v0.10.1.0: January 8, 2021 [Release Notes]
- v0.10.0.1: December 2, 2020 [Release Notes]
- v0.10.0.0: November 26, 2020 [Release Notes]
- v0.9.4.0: June 26, 2020 [Release Notes]
- v0.9.3.1: June 11, 2020 [Release Notes]
- v0.9.3.0: May 27, 2020 [Release Notes]
- v0.9.2.0: May 15, 2020 [Release Notes]
- v0.9.1.0: May 6, 2020 [Release Notes]
- v0.9.0.0: April 18, 2020 [Release Notes]
- v0.8.1.0: March 22, 2020 [Release Notes]
- v0.8.0.0: March 12, 2020 [Release Notes]
- v0.7.2.0: January 25, 2020 [Release Notes]
- v0.7.1.0: January 9, 2020 [Release Notes]
- v0.7.0.0: December 13, 2019 [Release Notes]
- v0.6.0.0: November 2, 2019
More details:

- PyPI v0.17.1 (commit `33c87c`, released 2022/08/13) is the last Pyserini release built on Lucene 8, based on Anserini v0.14.4. Thereafter, Anserini trunk was upgraded to Lucene 9.
- PyPI v0.18.0 (commit `5fab14`, released 2022/09/26) is built on Anserini v0.15.0, using Lucene 9. Thereafter, Pyserini trunk advanced to Lucene 9.
Explanations:

- What's the impact? Indexes built with Lucene 8 are not fully compatible with Lucene 9 code (see Anserini #1952). The workaround is to disable consistent tie-breaking, which happens automatically if a Lucene 8 index is detected by Pyserini. However, Lucene 9 code running on Lucene 8 indexes will give slightly different results than Lucene 8 code running on Lucene 8 indexes. Note that Lucene 8 code is not able to read indexes built with Lucene 9.
- Why is this necessary? Although disruptive, an upgrade to Lucene 9 is necessary to take advantage of Lucene's HNSW indexes, which will increase the capabilities of Pyserini and open up the design space of dense/sparse hybrids.
With v0.11.0.0 and before, Pyserini versions adopted the convention of X.Y.Z.W, where X.Y.Z tracks the version of Anserini, and W is used to distinguish different releases on the Python end. Starting with Anserini v0.12.0, Anserini and Pyserini versions have become decoupled.
Historically, Anserini was designed to work with JDK 11. There was a JRE path change above JDK 9 that broke pyjnius 1.2.0, as documented in this issue and also reported in Anserini here and here. The issue was fixed with pyjnius 1.2.1 (released December 2019). The previous error was documented in this notebook, and this notebook documents the fix.
If you use Pyserini, please cite the following paper:
@INPROCEEDINGS{Lin_etal_SIGIR2021_Pyserini,
author = "Jimmy Lin and Xueguang Ma and Sheng-Chieh Lin and Jheng-Hong Yang and Ronak Pradeep and Rodrigo Nogueira",
title = "{Pyserini}: A {Python} Toolkit for Reproducible Information Retrieval Research with Sparse and Dense Representations",
booktitle = "Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021)",
year = 2021,
pages = "2356--2362",
}
This research is supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada.