Instructions for using prithivMLmods/Bellatrix-Tiny-1B-R1 with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use prithivMLmods/Bellatrix-Tiny-1B-R1 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="prithivMLmods/Bellatrix-Tiny-1B-R1")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Bellatrix-Tiny-1B-R1")
model = AutoModelForCausalLM.from_pretrained("prithivMLmods/Bellatrix-Tiny-1B-R1")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Inference Providers
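How to use prithivMLmods/Bellatrix-Tiny-1B-R1 with Inference Providers, as a minimal sketch; whether this model is actually deployed on a provider, and the `max_tokens` value, are assumptions:

```python
# Minimal sketch: query the model through Hugging Face Inference Providers.
# Assumes `huggingface_hub` is installed and HF_TOKEN is set in the environment;
# provider availability for this particular model is an assumption.
from huggingface_hub import InferenceClient

client = InferenceClient(model="prithivMLmods/Bellatrix-Tiny-1B-R1")
response = client.chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```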
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use prithivMLmods/Bellatrix-Tiny-1B-R1 with vLLM:
Install from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "prithivMLmods/Bellatrix-Tiny-1B-R1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/Bellatrix-Tiny-1B-R1",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
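Because the vLLM server exposes an OpenAI-compatible API, the official `openai` Python client can replace the curl call above. A minimal sketch, assuming `pip install openai` and the server running on localhost:8000; the same client works against the SGLang server below by switching the port to 30000:

```python
# Minimal sketch: call the local vLLM server via its OpenAI-compatible API.
# Assumes the `openai` package is installed and a server started as shown above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.chat.completions.create(
    model="prithivMLmods/Bellatrix-Tiny-1B-R1",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(completion.choices[0].message.content)
```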
- SGLang
How to use prithivMLmods/Bellatrix-Tiny-1B-R1 with SGLang:
Install from pip and serve the model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "prithivMLmods/Bellatrix-Tiny-1B-R1" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/Bellatrix-Tiny-1B-R1",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "prithivMLmods/Bellatrix-Tiny-1B-R1" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/Bellatrix-Tiny-1B-R1",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

- Docker Model Runner
How to use prithivMLmods/Bellatrix-Tiny-1B-R1 with Docker Model Runner:
```sh
docker model run hf.co/prithivMLmods/Bellatrix-Tiny-1B-R1
```
```yaml
license: llama3.2
language:
- en
base_model:
- meta-llama/Llama-3.2-1B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- GRPO
- Reinforcement learning
- trl
- SFT
```
# **Bellatrix-Tiny-1B-R1**
Bellatrix is a reasoning-focused model trained on synthetic DeepSeek-R1 dataset entries. Its instruction-tuned, text-only variants are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks, and outperform many available open-source options. Bellatrix is an auto-regressive language model built on an optimized transformer architecture; the tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF).
# **Use with transformers**
Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import torch
from transformers import pipeline

model_id = "prithivMLmods/Bellatrix-Tiny-1B-R1"
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
outputs = pipe(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
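When chat messages are passed to the pipeline, `generated_text` contains the full conversation, so the `[-1]` index selects only the newly generated assistant turn.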
Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generation, quantization, and more at [`huggingface-llama-recipes`](https://github.com/huggingface/huggingface-llama-recipes).
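As one concrete example of the quantization options mentioned above, the model can be loaded in 4-bit precision with `bitsandbytes`. A minimal sketch, assuming a CUDA GPU and `pip install bitsandbytes`; the settings are illustrative defaults, not the recipe from the linked repository:

```python
# Minimal sketch: 4-bit quantized loading with bitsandbytes.
# Assumes a CUDA GPU; the config values here are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "prithivMLmods/Bellatrix-Tiny-1B-R1"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```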
# **Intended Use**
Bellatrix is designed for applications that require advanced reasoning and multilingual dialogue capabilities. It is particularly suitable for:
- **Agentic Retrieval**: Enabling intelligent retrieval of relevant information in a dialogue or query-response system.
- **Summarization Tasks**: Condensing large bodies of text into concise summaries for easier comprehension (see the sketch after this list).
- **Multilingual Use Cases**: Supporting conversations in multiple languages with high accuracy and coherence.
- **Instruction-Based Applications**: Following complex, context-aware instructions to generate precise outputs in a variety of scenarios.
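As an illustration of the summarization use case, the same chat interface accepts a system instruction plus the text to condense. A minimal sketch; the system prompt and input text are hypothetical examples:

```python
# Minimal sketch: summarization via the chat interface.
# The system prompt and input text are hypothetical.
from transformers import pipeline

pipe = pipeline("text-generation", model="prithivMLmods/Bellatrix-Tiny-1B-R1")
messages = [
    {"role": "system", "content": "Summarize the user's text in one sentence."},
    {"role": "user", "content": (
        "Transformers are neural networks built around self-attention, "
        "which lets every token weigh every other token when building "
        "its representation."
    )},
]
# The last message in generated_text is the model's summary.
print(pipe(messages, max_new_tokens=64)[0]["generated_text"][-1]["content"])
```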
# **Limitations**
Despite its capabilities, Bellatrix has some limitations:
1. **Domain Specificity**: While it performs well on general tasks, its performance may degrade with highly specialized or niche datasets.
2. **Dependence on Training Data**: It is only as good as the quality and diversity of its training data, which may lead to biases or inaccuracies.
3. **Computational Resources**: The model’s optimized transformer architecture can be resource-intensive, requiring significant computational power for fine-tuning and inference.
4. **Language Coverage**: While multilingual, some languages or dialects may have limited support or lower performance compared to widely used ones.
5. **Real-World Contexts**: It may struggle to understand nuanced or ambiguous real-world scenarios not covered during training.