Instructions to use Undi95/MistralThinker-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use Undi95/MistralThinker-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Undi95/MistralThinker-GGUF",
    filename="MistralThinker.q8_0.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Hello, who are you?"}
    ]
)
```
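create_chat_completion returns an OpenAI-style dict; a minimal way to read the reply, assuming the call above is stored in a variable:

```python
# out = llm.create_chat_completion(messages=[...])
print(out["choices"][0]["message"]["content"])
```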
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Undi95/MistralThinker-GGUF with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Undi95/MistralThinker-GGUF:Q8_0

# Run inference directly in the terminal:
llama-cli -hf Undi95/MistralThinker-GGUF:Q8_0
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Undi95/MistralThinker-GGUF:Q8_0

# Run inference directly in the terminal:
llama-cli -hf Undi95/MistralThinker-GGUF:Q8_0
```
Use pre-built binary
```sh
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Undi95/MistralThinker-GGUF:Q8_0

# Run inference directly in the terminal:
./llama-cli -hf Undi95/MistralThinker-GGUF:Q8_0
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Undi95/MistralThinker-GGUF:Q8_0

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Undi95/MistralThinker-GGUF:Q8_0
```
Use Docker
```sh
docker model run hf.co/Undi95/MistralThinker-GGUF:Q8_0
```
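Whichever way you start llama-server, it exposes an OpenAI-compatible API (port 8080 by default). A minimal sketch using the openai Python client; the base URL and placeholder API key are assumptions for a default local setup:

```python
# pip install openai
from openai import OpenAI

# llama-server listens on http://localhost:8080 by default;
# a local server accepts any placeholder API key.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-needed")

resp = client.chat.completions.create(
    model="Undi95/MistralThinker-GGUF",  # a single-model server ignores this name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```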
- LM Studio
- Jan
- Ollama
How to use Undi95/MistralThinker-GGUF with Ollama:
```sh
ollama run hf.co/Undi95/MistralThinker-GGUF:Q8_0
```
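Ollama also exposes a local REST API (port 11434 by default). A minimal sketch, assuming the model has been pulled with the command above and the Ollama server is running:

```python
# pip install requests
import requests

# Ollama's chat endpoint; stream=False returns one JSON object
# instead of a stream of chunks.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/Undi95/MistralThinker-GGUF:Q8_0",
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])
```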
- Unsloth Studio
How to use Undi95/MistralThinker-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Undi95/MistralThinker-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```sh
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Undi95/MistralThinker-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Undi95/MistralThinker-GGUF to start chatting
```
- Docker Model Runner
How to use Undi95/MistralThinker-GGUF with Docker Model Runner:
```sh
docker model run hf.co/Undi95/MistralThinker-GGUF:Q8_0
```
- Lemonade
How to use Undi95/MistralThinker-GGUF with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Undi95/MistralThinker-GGUF:Q8_0
```
Run and chat with the model
```sh
lemonade run user.MistralThinker-GGUF-Q8_0
```
List all available models
```sh
lemonade list
```
These repos are public because I hit the private storage limit, but feel free to try them. This model uses the Mistral V7 prompt format.
It was trained on DeepSeek R1 RP logs and character cards, and some funny shit.
Default system prompt: "You are MistralThinker, a Large Language Model (LLM) created by Undi.\nYour knowledge base was last updated on 2023-10-01. Current date: {date}.\n\nWhen unsure, state you don't know."
I recommend putting information about the persona and yourself in the system prompt to let the magic happen, as in the example below.
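For instance, a filled-in system prompt might look like this (the date and the persona lines are hypothetical placeholders, not from the model card):

```
You are MistralThinker, a Large Language Model (LLM) created by Undi.
Your knowledge base was last updated on 2023-10-01. Current date: 2025-03-01.

When unsure, state you don't know.

{description of the character the model is playing}
{description of your own persona}
```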
Sadly, I have a problem with the prompt format in the tokenizer_config.json.
I tried to recreate what DeepSeek did with their distills: they added <think> at the beginning of each assistant reply and cut the thinking part out of the context.
I did the same, but on my side the first <think> doesn't appear when using chat completion.
Other than that, the model seems fully functional, so feel free to try it, but be sure to prefill <think> one way or another (see the sketch below).
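One way to force the prefill is to skip the chat template entirely and complete a raw prompt that already ends in <think>. A minimal sketch with llama-cpp-python; the exact V7 template layout below is an assumption, so verify it against the repo's tokenizer_config.json before relying on it:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Undi95/MistralThinker-GGUF",
    filename="MistralThinker.q8_0.gguf",
)

# Hand-built Mistral V7-style prompt (assumed layout, check
# tokenizer_config.json), ending with <think> so the model
# starts its reply inside the thinking block.
prompt = (
    "[SYSTEM_PROMPT]You are MistralThinker, a Large Language Model (LLM) "
    "created by Undi.[/SYSTEM_PROMPT]"
    "[INST]Hello, who are you?[/INST]"
    "<think>"
)

out = llm.create_completion(prompt=prompt, max_tokens=512)
print("<think>" + out["choices"][0]["text"])
```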
Here's an example where the character card contains You're roleplaying as a hot 35-year-old motherly MILF, and a custom system prompt.
