Instructions for using pankajmathur/orca_mini_v4_8b with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use pankajmathur/orca_mini_v4_8b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="pankajmathur/orca_mini_v4_8b")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("pankajmathur/orca_mini_v4_8b")
model = AutoModelForCausalLM.from_pretrained("pankajmathur/orca_mini_v4_8b")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use pankajmathur/orca_mini_v4_8b with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "pankajmathur/orca_mini_v4_8b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "pankajmathur/orca_mini_v4_8b",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker:

```shell
docker model run hf.co/pankajmathur/orca_mini_v4_8b
```
- SGLang
How to use pankajmathur/orca_mini_v4_8b with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "pankajmathur/orca_mini_v4_8b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "pankajmathur/orca_mini_v4_8b",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker images:

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "pankajmathur/orca_mini_v4_8b" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "pankajmathur/orca_mini_v4_8b",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

- Docker Model Runner
How to use pankajmathur/orca_mini_v4_8b with Docker Model Runner:
```shell
docker model run hf.co/pankajmathur/orca_mini_v4_8b
```
Model Name: llama_3_orca_mini_v4_8b
A Llama-3-8b base model trained on Orca-style mini datasets.
"Obsessed with GenAI's potential? So am I ! Let's create together 🚀 https://www.linkedin.com/in/pankajam"
NOTICE
Provided you give proper credit and attribution, you are granted permission to use this model as a foundational base for further DPO/PPO tuning or merges. I actively encourage users to customize and enhance the model according to their specific needs, as this version is designed to be a comprehensive, fully fine-tuned general-purpose model. Dive in and innovate!
Evaluation
We evaluated this model on a wide range of tasks using the Language Model Evaluation Harness from EleutherAI.
Here are the results on metrics similar to those used by the HuggingFaceH4 Open LLM Leaderboard:
| Metric | Value |
|---|---|
| Avg. | 66.65 |
| AI2 Reasoning Challenge (25-Shot) | 58.02 |
| HellaSwag (10-Shot) | 81.65 |
| MMLU (5-Shot) | 63.23 |
| TruthfulQA (0-shot) | 55.78 |
| Winogrande (5-shot) | 73.95 |
| GSM8k (5-shot) | 67.25 |
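As a sanity check, the Avg. row is the arithmetic mean of the six benchmark scores in the table:

```python
# Benchmark scores reported in the table above
scores = {
    "ARC (25-shot)": 58.02,
    "HellaSwag (10-shot)": 81.65,
    "MMLU (5-shot)": 63.23,
    "TruthfulQA (0-shot)": 55.78,
    "Winogrande (5-shot)": 73.95,
    "GSM8k (5-shot)": 67.25,
}

# The leaderboard average is the plain mean of the six metrics
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # → 66.65
```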
Example Usage
Here is the ChatML prompt format:

```
<|im_start|>system
You are Orca Mini, a helpful AI assistant.<|im_end|>
<|im_start|>user
Hello Orca Mini, what can you do for me?<|im_end|>
<|im_start|>assistant
```
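The template above can also be reproduced with plain string formatting, which is handy for inspecting the exact prompt a model will see. A minimal sketch (the helper name `build_chatml_prompt` is illustrative, not part of the model's API; in practice `tokenizer.apply_chat_template` does this for you):

```python
def build_chatml_prompt(messages):
    # Wrap each message in ChatML <|im_start|> / <|im_end|> tags
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    # A trailing assistant tag cues the model to generate its reply
    parts.append("<|im_start|>assistant")
    return "\n".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"},
])
print(prompt)
```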
Below is a code example showing how to use this model (note that `AutoModelForCausalLM` is required for generation, and `apply_chat_template` must return a dict of tensors so it can be unpacked into `generate`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_slug = "pankajmathur/orca_mini_v4_8b"
model = AutoModelForCausalLM.from_pretrained(model_slug)
tokenizer = AutoTokenizer.from_pretrained(model_slug)
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"},
]
gen_input = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
)
outputs = model.generate(**gen_input, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][gen_input["input_ids"].shape[-1]:], skip_special_tokens=True))
```
This model is governed by the META LLAMA 3 COMMUNITY LICENSE AGREEMENT.
Quants
GGUF: Coming Soon
AWQ: Coming Soon