Instructions to use N-Bot-Int/MistThena7B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use N-Bot-Int/MistThena7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="N-Bot-Int/MistThena7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("N-Bot-Int/MistThena7B")
model = AutoModelForCausalLM.from_pretrained("N-Bot-Int/MistThena7B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use N-Bot-Int/MistThena7B with vLLM:
Install from pip and serve model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "N-Bot-Int/MistThena7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "N-Bot-Int/MistThena7B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker
```bash
docker model run hf.co/N-Bot-Int/MistThena7B
```
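Because the vLLM server exposes an OpenAI-compatible API, it can also be called from Python instead of curl. A minimal sketch, assuming the server started above is running on localhost:8000 and the separate `openai` client package is installed:

```python
# pip install openai
from openai import OpenAI

# Point the OpenAI client at the local vLLM server (any non-empty API key works).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="N-Bot-Int/MistThena7B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```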
- SGLang
How to use N-Bot-Int/MistThena7B with SGLang:
Install from pip and serve model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "N-Bot-Int/MistThena7B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "N-Bot-Int/MistThena7B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "N-Bot-Int/MistThena7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "N-Bot-Int/MistThena7B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
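The SGLang endpoint is OpenAI-compatible as well, so the same Python client can stream tokens as they are generated. A minimal sketch, assuming the server above is listening on localhost:30000 and the `openai` package is installed:

```python
# pip install openai
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

# Stream tokens as they are produced instead of waiting for the full reply.
stream = client.chat.completions.create(
    model="N-Bot-Int/MistThena7B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```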
- Unsloth Studio
How to use N-Bot-Int/MistThena7B with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for N-Bot-Int/MistThena7B to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for N-Bot-Int/MistThena7B to start chatting
```
Using HuggingFace Spaces for Unsloth
```text
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for N-Bot-Int/MistThena7B to start chatting
```
Load model with FastModel
```bash
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="N-Bot-Int/MistThena7B",
    max_seq_length=2048,
)
```
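The returned pair behaves like a regular transformers model and tokenizer, so chat generation works the usual way. A minimal sketch building on the snippet above; the prompt and generation settings here are illustrative assumptions, not values confirmed by the model authors:

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="N-Bot-Int/MistThena7B",
    max_seq_length=2048,
)

# Build a chat prompt with the tokenizer's chat template and generate a reply.
messages = [{"role": "user", "content": "Who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```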
- Docker Model Runner
How to use N-Bot-Int/MistThena7B with Docker Model Runner:
```bash
docker model run hf.co/N-Bot-Int/MistThena7B
```
Official quants are uploaded by us.
Wider quant support is uploaded by mradermacher!
- Thank you so much for the help, mradermacher!
- mradermacher's GGUF & weight support
- mradermacher's GGUF & weight support (i1)
MistThena7B - Model A
MistThena7B is our brand-new AI, stepping up to an even bigger 7B parameter count and ditching Llama 3.2 for Mistral for lightweight fine-tuning and fast training and output. MistThena7B is designed to set aside general benchmark scores and prioritize roleplaying above all: it was trained on 5x more data than we used for OpenElla3-Llama3.2B, making this new model more resistant to hallucinations and giving it better, uncensored text generation.
MistThena7B Model A does not suffer from the same prompting issue as OpenElla3-Llama3.2B; however, please use ChatML-style prompting for a better experience (see the example below), and remember to be aware of bias from the training dataset. The model is released under Apache 2.0, but WE ARE NOT RESPONSIBLE FOR YOUR USAGE, PROMPTING, OR ANY OTHER WAY YOU USE THE MODEL. PLEASE BE GUIDED ACCORDINGLY AND USE IT AT YOUR OWN WILL.
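For reference, ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` markers. The sketch below shows the general shape of such a prompt; this is the generic ChatML convention, not a token layout confirmed for this specific checkpoint, so prefer `tokenizer.apply_chat_template` where possible:

```python
# Generic ChatML-style prompt layout (assumed convention, not model-specific):
prompt = (
    "<|im_start|>system\n"
    "You are a helpful roleplay partner.<|im_end|>\n"
    "<|im_start|>user\n"
    "Describe the tavern we just walked into.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
# Send `prompt` to your backend of choice (transformers, vLLM, SGLang, ...)
# as a plain completion request; the model continues after the assistant tag.
```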
MistThena7B Model A outperforms the OpenElla family of models; however, please keep the parameter difference in mind. It outperforms them in our testing benchmarks for roleplaying, staying engaged in RP, and generating prompts, and you are free to release your own benchmark.
MistThena7B is fine-tuned on a larger dataset, so please report any issues you find, such as overfitting, or improvements for the future Model B, to our email nexus.networkinteractives@gmail.com. Once again, feel free to modify the LoRA to your liking; however, please consider crediting this page, and if you increase its dataset, please handle it with care and ethical consideration.
MistThena7B is:
- Developed by: N-Bot-Int
- License: apache-2.0
- Finetuned from model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
- Sequentially trained from model: N-Bot-Int/OpenElla3-Llama3.2A
- Dataset combined using: Mosher-R1 (proprietary software)
Metrics made by ItsMeDevRoland, which compare:
- DeepSeek R1 3B GGUF
- Dolphin 3B GGUF
- Hermes 3B Llama GGUF
- OpenElla3-Llama3.2B GGUF
All models are ranked with the same prompt, same temperature, and same hardware (Google Colab), to properly showcase their differences and strengths.
THIS MODEL EXCELS AT LONGER PROMPTS AND STAYING IN CHARACTER, BUT LAGS BEHIND DEEPSEEK-R1.
METRIC SCORES FOR THIS MODEL ARE YET TO BE RELEASED; PLEASE REMAIN PATIENT WHILE ItsMeDevRoland PREPARES AN UPDATED REPORT.
Notice
- For a good experience, please use:
  - temperature = 1.5, min_p = 0.1, and max_new_tokens = 128 (see the sketch below)
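A minimal sketch of applying these settings with transformers; `do_sample=True` is an added assumption needed for temperature and min_p to take effect, and `min_p` requires a recent transformers release:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="N-Bot-Int/MistThena7B")

messages = [{"role": "user", "content": "Stay in character as a wandering bard and greet me."}]
out = pipe(
    messages,
    do_sample=True,       # sampling must be on for temperature/min_p to apply
    temperature=1.5,      # recommended settings from the notice above
    min_p=0.1,
    max_new_tokens=128,
)
print(out[0]["generated_text"][-1]["content"])
```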
Detail card:
Parameters
- 7 billion parameters
- (Please check whether your GPU can run 7B models)
Training
- 200 steps: N-Bot-Int/Iris-Uncensored-R1
- 100 steps: N-Bot-Int/Iris-Uncensored-R1 (reinforcement training)
- 100 steps: M-Datasets
- 60 steps (DPO): Unalignment/Toxic-DPO
- 200 steps
Finetuning tool:
Unsloth AI
- This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library; a rough sketch of such a setup follows below.

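For readers who want to reproduce a similar setup, here is a minimal sketch in the style of Unsloth's notebooks, assuming the unsloth, trl, and datasets packages are installed. The dataset column name, LoRA hyperparameters, and exact trainer argument names (which shift between TRL versions) are placeholders, not the exact recipe used for this model:

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the 4-bit base model listed above and attach LoRA adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(model, r=16, lora_alpha=16)

# Example dataset from the detail card; the "text" column is an assumption.
dataset = load_dataset("N-Bot-Int/Iris-Uncensored-R1", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",      # adjust to the dataset's actual column
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=200,              # matches the 200-step stage listed above
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```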
Fine-tuned Using:
Google Colab