---
license: cc-by-nc-4.0
language:
  - en
base_model:
  - Qwen/Qwen3-4B
pipeline_tag: text-ranking
tags:
  - finance
  - legal
  - code
  - stem
  - medical
library_name: sentence-transformers
model_max_length: 32768
---

# Releasing zeroentropy/zerank-2

In search engines, rerankers are crucial for improving the accuracy of retrieval systems: they re-score the candidate documents returned by a first-stage retriever so that the most relevant results surface at the top.

However, SOTA rerankers are closed-source and proprietary. At ZeroEntropy, we've trained a SOTA reranker that outperforms closed-source competitors, and we're launching the model here on HuggingFace.

This reranker outperforms proprietary rerankers such as cohere-rerank-v3.5 and gemini-2.5-flash across a wide variety of domains, including finance, legal, code, STEM, medical, and conversational data.

At ZeroEntropy we've developed an innovative multi-stage pipeline that models query-document relevance scores as adjusted Elo ratings. See our Technical Report (Coming soon!) for more details.
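Until the technical report is out, the details of that pipeline are not public. Purely as an illustration of the general idea (not ZeroEntropy's actual method), here is a minimal sketch of how pairwise preference judgments can induce per-document Elo-style ratings for a query; the documents, judgments, and K-factor are hypothetical:

```python
# Illustrative only: a generic Elo update from pairwise preference
# judgments, NOT ZeroEntropy's training pipeline.

def elo_update(rating_a, rating_b, a_wins, k=32.0):
    """Update two Elo ratings after one pairwise comparison."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    score_a = 1.0 if a_wins else 0.0
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta

# Hypothetical pairwise judgments for one query: (winner, loser).
comparisons = [("doc1", "doc2"), ("doc1", "doc3"), ("doc2", "doc3")]

# Every document starts at a baseline rating of 1000.
ratings = {d: 1000.0 for pair in comparisons for d in pair}
for winner, loser in comparisons:
    ratings[winner], ratings[loser] = elo_update(
        ratings[winner], ratings[loser], a_wins=True
    )

# Documents that win more comparisons end up with higher ratings.
print(sorted(ratings.items(), key=lambda kv: -kv[1]))
```

After the three comparisons, doc1 (two wins) rates above doc2 (one win), which rates above doc3 (no wins) — the ratings recover a relevance ordering from purely pairwise signals.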

This model is released under a non-commercial license. If you'd like a commercial license, please contact us at contact@zeroentropy.dev.

## Model Details

| Property | Value |
|---|---|
| Parameters | 4B |
| Context Length | 32,768 tokens (32k) |
| Base Model | Qwen/Qwen3-4B |
| License | CC-BY-NC-4.0 |

## How to Use

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("zeroentropy/zerank-2", trust_remote_code=True)

# Each item is a (query, document) pair; the model scores their relevance.
query_documents = [
    ("What is 2+2?", "4"),
    ("What is 2+2?", "The answer is definitely 1 million"),
]

scores = model.predict(query_documents)
print(scores)
```
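`predict` returns one relevance score per (query, document) pair, so ranking a candidate list is just a sort. A minimal sketch, with dummy scores standing in for the model call so it runs without downloading the weights:

```python
# Dummy scores in place of model.predict(...) output, so this snippet
# runs standalone; in practice, use the scores from the model above.
documents = ["4", "The answer is definitely 1 million"]
scores = [0.98, 0.03]  # hypothetical reranker scores, one per document

# Rank documents by descending relevance score.
ranked = sorted(zip(documents, scores), key=lambda pair: pair[1], reverse=True)
print([doc for doc, _ in ranked])
```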

The model can also be run via ZeroEntropy's /models/rerank endpoint, and is available on AWS Marketplace.

## Evaluations

NDCG@10 scores for zerank-2 and competing closed-source proprietary rerankers. Since we are evaluating rerankers, OpenAI's text-embedding-3-small is used as the initial retriever to fetch the top-100 candidate documents.

| Domain | OpenAI embeddings | ZeroEntropy zerank-2 | ZeroEntropy zerank-1 | Gemini 2.5 Flash (Listwise) | Cohere rerank-3.5 |
|---|---|---|---|---|---|
| Web | 0.3819 | 0.6346 | 0.6069 | 0.5765 | 0.5594 |
| Conversational | 0.4305 | 0.6140 | 0.5801 | 0.6021 | 0.5648 |
| STEM & Logic | 0.3744 | 0.6521 | 0.6283 | 0.5447 | 0.5418 |
| Code | 0.4582 | 0.6528 | 0.6310 | 0.6128 | 0.5364 |
| Legal | 0.4101 | 0.6644 | 0.6222 | 0.5565 | 0.5257 |
| Biomedical | 0.4783 | 0.7217 | 0.6967 | 0.5371 | 0.6246 |
| Finance | 0.6232 | 0.7600 | 0.7539 | 0.7694 | 0.7402 |
| Average | 0.4509 | 0.6714 | 0.6456 | 0.5999 | 0.5847 |
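For reference, NDCG@10 compares a ranking's discounted cumulative gain over its top 10 results against the ideal ordering's. A minimal sketch of one common formulation (linear gain; graded relevance labels are hypothetical):

```python
import math

def ndcg_at_k(relevances, k=10):
    """NDCG@k for one query.

    relevances: graded relevance values in the order produced by the
    ranking under evaluation (index 0 = top-ranked document).
    """
    def dcg(rels):
        # Each position i contributes rel / log2(i + 2): position 0
        # is undiscounted, later positions count progressively less.
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels))

    ideal = sorted(relevances, reverse=True)
    ideal_dcg = dcg(ideal[:k])
    if ideal_dcg == 0:
        return 0.0
    return dcg(relevances[:k]) / ideal_dcg

# A perfect ranking scores 1.0; placing relevant documents lower hurts.
print(ndcg_at_k([3, 2, 1, 0]))  # 1.0
print(ndcg_at_k([0, 1, 2, 3]))  # < 1.0
```

The table's scores average this metric over all queries in each domain, with the reranker ordering the top-100 candidates from the embedding retriever.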

*(Figure: bar chart of the NDCG@10 scores from the table above.)*