Mistral-24B - GGUF
This model was finetuned and converted to GGUF format using Unsloth.
Example usage:
- For text only LLMs: llama-cli --hf repo_id/model_name -p "why is the sky blue?" (a concrete command for this repository follows below)
- For multimodal models: llama-mtmd-cli -m model_name.gguf --mmproj mmproj_file.gguf
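For this repository specifically, the text-only example above becomes the command below. The repo id AlSamCur123/Mistral-24B is taken from the llama-cpp-python example at the end of this page; depending on your llama.cpp build, the flag may be spelled -hf or --hf-repo, and the model file is fetched directly from the Hugging Face Hub:

llama-cli --hf AlSamCur123/Mistral-24B -p "why is the sky blue?"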
Available Model files:
- mistral-small-24b-instruct-2501.Q5_K_M.gguf
- mistral-small-24b-instruct-2501.Q4_K_M.gguf
- mistral-small-24b-instruct-2501.Q4_0.gguf
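To pin one specific quantization instead of letting llama.cpp pick a default, you can download it first and run it from a local path. This is a minimal sketch assuming the huggingface_hub CLI is installed (pip install huggingface_hub) and that the Q5_K_M file is the one you want:

# download a single GGUF file from this repo into the current directory
huggingface-cli download AlSamCur123/Mistral-24B mistral-small-24b-instruct-2501.Q5_K_M.gguf --local-dir .
# run it from the local path
llama-cli -m mistral-small-24b-instruct-2501.Q5_K_M.gguf -p "why is the sky blue?"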
Ollama
An Ollama Modelfile is included for easy deployment.
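The Modelfile itself is not reproduced on this page. As a rough sketch of how it is used, assuming it sits next to a downloaded GGUF file and noting that the local tag mistral-24b below is only an illustrative name:

# build a local Ollama model from the included Modelfile
ollama create mistral-24b -f Modelfile
# chat with it
ollama run mistral-24b "why is the sky blue?"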
Example with llama-cpp-python:

# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="AlSamCur123/Mistral-24B",
    filename="mistral-small-24b-instruct-2501.Q4_K_M.gguf",  # any of the quantizations listed above
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "why is the sky blue?"}
    ]
)