Instructions to use N-Bot-Int/MistThena7B-V2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use N-Bot-Int/MistThena7B-V2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="N-Bot-Int/MistThena7B-V2")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly (use the causal-LM class for a text-generation model)
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("N-Bot-Int/MistThena7B-V2")
model = AutoModelForCausalLM.from_pretrained("N-Bot-Int/MistThena7B-V2", dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use N-Bot-Int/MistThena7B-V2 with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "N-Bot-Int/MistThena7B-V2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "N-Bot-Int/MistThena7B-V2",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/N-Bot-Int/MistThena7B-V2
```
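The curl call above can equally be made from Python's standard library. This is a minimal sketch, assuming the vLLM server from the previous step is already running on localhost:8000 (vLLM's default port); the helper names are illustrative, not part of vLLM's API.

```python
import json
from urllib import request

API_URL = "http://localhost:8000/v1/chat/completions"  # default vLLM port


def build_payload(prompt: str) -> dict:
    """Build an OpenAI-compatible chat request body for the served model."""
    return {
        "model": "N-Bot-Int/MistThena7B-V2",
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(prompt: str) -> str:
    """POST the prompt to the running server and return the assistant reply."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    # The assistant's reply lives in the first choice's message content
    return body["choices"][0]["message"]["content"]
```

Because the endpoint is OpenAI-compatible, the official `openai` client also works by pointing its `base_url` at the server.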
- SGLang
How to use N-Bot-Int/MistThena7B-V2 with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "N-Bot-Int/MistThena7B-V2" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "N-Bot-Int/MistThena7B-V2",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "N-Bot-Int/MistThena7B-V2" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "N-Bot-Int/MistThena7B-V2",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Unsloth Studio
How to use N-Bot-Int/MistThena7B-V2 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
# Install Unsloth Studio:
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for N-Bot-Int/MistThena7B-V2 to start chatting
```
Install Unsloth Studio (Windows)
```powershell
# Install Unsloth Studio:
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for N-Bot-Int/MistThena7B-V2 to start chatting
```
Using Hugging Face Spaces for Unsloth
No setup is required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for N-Bot-Int/MistThena7B-V2 to start chatting.
Load model with FastModel
```shell
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="N-Bot-Int/MistThena7B-V2",
    max_seq_length=2048,
)
```
- Docker Model Runner
How to use N-Bot-Int/MistThena7B-V2 with Docker Model Runner:
```shell
docker model run hf.co/N-Bot-Int/MistThena7B-V2
```
Official quants are uploaded by us.
Support us on Ko-Fi!
MistThena7B-V2
Introducing our mind-boggling MistThena7B-V2! This version offers an upgraded RP experience beyond the other AI models we've made, outcompeting our 3B, 1B, MythoMax, DeepSeek, and Hermes fine-tunes for roleplaying!
MistThena7B-V2 offers expanded roleplay capabilities: we used our EmojiEmulsifyer program to train MistThena7B to use emojis, expanding the immersiveness and the set of actions MistThena7B can perform in roleplay.
Activate MistThena's expanded actions by mirroring it (i.e., using emojis in your own prompts) to encourage MistThena's use of emojis and actions.
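The mirroring tip above can be sketched as a small prompt helper. The function name and the default emoji are illustrative choices, not part of the model's interface; any emoji in the user turn serves the same purpose.

```python
def mirrored_prompt(text: str, action_emoji: str = "😊") -> dict:
    """Append an emoji to the user turn so MistThena mirrors emoji/action use."""
    return {"role": "user", "content": f"{text} {action_emoji}"}


# A chat-format message list ready to pass to a pipeline or chat endpoint
messages = [mirrored_prompt("*waves* Hi! Want to explore the old library with me?")]
```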
MistThena7B-V2 is also trained on 160K examples from our latest corpus, IRIS_UNCENSORED_R2. This shows that Iris_Uncensored_R2 has strong potential to produce good outputs, and it reveals MistThena7B's roleplaying capabilities, which were obtained through rigorous training combined with preventive measures against overfitting.
MistThena7B incorporates more fine-tuned data, so please report any issues (overfitting, or suggested improvements for the future V3 model) to us at nexus.networkinteractives@gmail.com. Feel free to modify the LoRA to your liking; however, please consider crediting this page, and if you expand its dataset, please handle it with care and ethical consideration.
MistThena is:
- Developed by: N-Bot-Int
- License: apache-2.0
- Finetuned from model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
- Sequentially trained from model: N-Bot-Int/OpenElla3-Llama3.2A
- Dataset combined using: Mosher-R1 (proprietary software)
- Metrics made by ItsMeDevRoland, which compare:
  - MistThena7B-V1
  - MistThena7B-V2 (60-step version)

All runs were ranked with the same prompt, the same temperature, and the same hardware (Google Colab) to properly showcase the differences and strengths of the models.
🌀 MistThena7B-V2: Slower Beats, Stronger Bonds — A Roleplay Revival
"She may not win the speed race, but when it comes to presence and performance — she owns the stage."
MistThena7B-V2 isn't just an upgrade; she's a reinvention. Built on V1's storytelling roots, V2 shifts her focus inward: longer scenes, deeper characters, and dialogue that breathes.

- 💬 Roleplay Evaluation
- ✍️ Length Score: 0.34 → 1.00 (🚀)
- 🧠 Character Consistency: 0.20 → 0.53
- 🌌 Immersion: 0.00 → 0.47
- 🎭 Overall RP Score: 0.17 → 0.67
She no longer just responds; she inhabits. MistThena7B-V2 is the method actor of models, channeling roles with vivid coherence and creative depth.
⚙️ The Cost of Craft: Time for Thought
- 🕒 Inference Time: 114s → 179s (↑)
- ⚡ Tokens/sec: 1.51 → 1.28 (↓)
Yes, she’s slower — but that’s not a bug. That’s intention. Every word is more considered, every output more deliberate.
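A quick back-of-the-envelope check (assuming the reported speed and time describe the same generation runs, which the card does not state explicitly) suggests the slowdown also reflects longer replies:

```python
# Approximate tokens generated = (tokens/sec) x (inference time in seconds)
v1_tokens = 1.51 * 114  # V1: ~172 tokens per reply
v2_tokens = 1.28 * 179  # V2: ~229 tokens per reply

print(round(v1_tokens), round(v2_tokens))
```

Roughly 172 vs. 229 tokens per reply, consistent with the Length Score jump reported above.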
📏 Traditional Metrics? A Trade-off
- 📘 BLEU Score: 0.43 → 0.18
- 📕 ROUGE-L: 0.60 → 0.32
While V1 outperforms on surface-level matching, V2 is optimized for experiential fidelity, not rigid overlap.
🎯 Reimagined for Realness
MistThena7B-V2 isn't trying to mimic; she's trying to immerse. She is designed to tell stories, embody roles, and hold character in long-form exchanges.
- 🧩 Tailored for:
- Narrative-heavy use cases
- Emotional continuity and consistency
- Richer, longer interactions
“MistThena7B-V2 trades benchmarks for believability. Less about matching, more about meaning. Less polished, more present.”
MistThena7B-V2 is where slower feels stronger.
Notice
Detail card:
Parameters
- 7 billion parameters
- (Please check whether your GPU can run 7B models)
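As a rough rule of thumb (an estimate, not vendor guidance), the weights alone need about parameter-count times bytes-per-parameter of VRAM; the KV cache and activations add overhead on top. The helper below is a hypothetical sketch of that arithmetic:

```python
def weight_memory_gb(params_billion: float, bits_per_param: int) -> float:
    """Rough VRAM needed for model weights alone (no KV cache or activations)."""
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9  # decimal GB


print(weight_memory_gb(7, 16))  # fp16  -> 14.0 GB
print(weight_memory_gb(7, 4))   # 4-bit -> 3.5 GB
```

This is why the bnb-4bit base checkpoint this model was fine-tuned from fits on much smaller GPUs than a full-precision 7B model would.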
Training
- 250 Steps
- N-Bot-Int/Iris_Uncensored_R2
- 60 Steps
- N-Bot-Int/Millie_DPO
- 250 Steps
Finetuning tool:
Unsloth AI
- This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Fine-tuned Using:
Google Colab




