Text Ranking
Transformers
Safetensors
sentence-transformers
Chinese
English
minicpm
text-classification
custom_code
Instructions for using openbmb/MiniCPM-Reranker-Light with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use openbmb/MiniCPM-Reranker-Light with Transformers:
```python
# Load model directly
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "openbmb/MiniCPM-Reranker-Light", trust_remote_code=True, dtype="auto"
)
```

- sentence-transformers
How to use openbmb/MiniCPM-Reranker-Light with sentence-transformers:
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("openbmb/MiniCPM-Reranker-Light", trust_remote_code=True)

query = "Which planet is known as the Red Planet?"
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]

scores = model.predict([(query, passage) for passage in passages])
print(scores)
```

- Notebooks
- Google Colab
- Kaggle
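The `predict` call in the sentence-transformers example returns one relevance score per (query, passage) pair. The usual next step in reranking is to sort the passages by that score, highest first. A minimal sketch — the scores here are stand-in values for illustration, not real model outputs:

```python
# Sort passages by reranker score, highest first.
# The scores below are hypothetical stand-ins; in practice they come from
# model.predict([(query, passage) for passage in passages]).
passages = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet.",
]
scores = [0.02, 0.97, 0.11, 0.35]  # hypothetical model outputs

# Pair each passage with its score, then sort descending by score.
ranked = sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)
for passage, score in ranked:
    print(f"{score:.2f}  {passage}")
```

The top-ranked passages are then passed downstream, e.g. as context for a RAG pipeline.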
Update README.md
README.md

```diff
@@ -31,7 +31,7 @@ UltraRAG-Reranker 基于 [MiniCPM-1B-sft-bf16](https://huggingface.co/openbmb/Mi
 UltraRAG-Reranker is trained based on [MiniCPM-1B-sft-bf16](https://huggingface.co/openbmb/MiniCPM-1B-sft-bf16) and incorporates bidirectional attention in its architecture. The model underwent multi-stage training using approximately 6 million training examples, including open-source, synthetic, and proprietary data.

-We also invite you to explore the
+We also invite you to explore the UltraRAG series:

 - Retrieval Model: [UltraRAG-Embedding](https://huggingface.co/openbmb/UltraRAG-Embedding)
 - Re-ranking Model: [UltraRAG-Reranker](https://huggingface.co/openbmb/UltraRAG-Reranker)
```