Instructions to use N-Bot-Int/MiniMaid-L3 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- PEFT
How to use N-Bot-Int/MiniMaid-L3 with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit")
model = PeftModel.from_pretrained(base_model, "N-Bot-Int/MiniMaid-L3")
```
- Transformers
How to use N-Bot-Int/MiniMaid-L3 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="N-Bot-Int/MiniMaid-L3")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("N-Bot-Int/MiniMaid-L3", dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use N-Bot-Int/MiniMaid-L3 with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "N-Bot-Int/MiniMaid-L3"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "N-Bot-Int/MiniMaid-L3",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
Use Docker
```shell
docker model run hf.co/N-Bot-Int/MiniMaid-L3
```
- SGLang
How to use N-Bot-Int/MiniMaid-L3 with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "N-Bot-Int/MiniMaid-L3" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "N-Bot-Int/MiniMaid-L3",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "N-Bot-Int/MiniMaid-L3" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "N-Bot-Int/MiniMaid-L3",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
- Unsloth Studio
How to use N-Bot-Int/MiniMaid-L3 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for N-Bot-Int/MiniMaid-L3 to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for N-Bot-Int/MiniMaid-L3 to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for N-Bot-Int/MiniMaid-L3 to start chatting
```
Load model with FastModel
```shell
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="N-Bot-Int/MiniMaid-L3",
    max_seq_length=2048,
)
```
- Docker Model Runner
How to use N-Bot-Int/MiniMaid-L3 with Docker Model Runner:
```shell
docker model run hf.co/N-Bot-Int/MiniMaid-L3
```
THIS IS THE FINAL release in the MiniMaid-L series, because we've hit the ceiling for a 1B model! Thank you so much for your support!
MiniMaid-L3
Introducing the MiniMaid-L3 model! A brand-new finetune of our MiniMaid-L2 architecture, using knowledge distillation to deliver even more coherent and immersive roleplay!

MiniMaid-L3 is a small update to L2. It uses knowledge distillation to combine our L2 architecture with MythoMax, a popular roleplaying model that was itself built with a model-merging technique to create MythoMax-7B. MiniMaid-L3, on the other hand, is a distillation of MiniMaid-L2 combined with knowledge distilled from MythoMax. The result is a more capable model that outcompetes its predecessor in roleplaying scenarios and even beats MiniMaid-L2's BLEU score!
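The exact distillation recipe (and our proprietary NKDProtoc tooling) isn't published here, but the core idea of knowledge distillation can be sketched in a few lines: the student is trained to match the teacher's temperature-softened output distribution. This is a generic illustration with hypothetical function names, not the actual MiniMaid training code:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over a list of raw logits.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between the temperature-softened teacher and student
    # distributions; the T^2 factor keeps gradient magnitudes comparable
    # across temperatures (Hinton et al.'s classic formulation).
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return kl * temperature ** 2
```

In a real setup this loss is usually mixed with the ordinary cross-entropy on ground-truth tokens, so the student learns from both the data and the teacher.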
MiniMaid-L1 Base-Model Card Procedure:
MiniMaid-L1 achieves good performance through a process of DPO and combined heavy finetuning. To prevent overfitting, we used high LR decay and introduced randomization techniques to keep the model from simply memorizing its training data. However, since training this on Google Colab is difficult, the model might underperform or underfit on specific tasks, or overfit on knowledge it managed to latch onto! Please be assured that we did our best, and it will improve as we move onwards!
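For context on the DPO step mentioned above: DPO tunes the policy directly on preference pairs, with no separate reward model. A minimal sketch of the per-pair loss (the generic textbook formula, not our training code; in practice a library such as TRL's `DPOTrainer` handles this):

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    # Direct Preference Optimization loss for one preference pair.
    # Inputs are summed log-probabilities of the chosen/rejected responses
    # under the trained policy (pi_*) and the frozen reference model (ref_*).
    logits = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # -log(sigmoid(logits)): small when the policy prefers the chosen answer
    # more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-logits)))
```

The loss shrinks as the policy raises the chosen response's likelihood relative to the rejected one, which is exactly the preference signal DPO trains on.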
MiniMaid-L3 is another instance of our smallest model yet! If you find any issue, please don't hesitate to email us at nexus.networkinteractives@gmail.com about any overfitting, or with improvements for the future model V4. Once again, feel free to modify the LoRA to your liking; however, please consider crediting this page, and if you expand its dataset, please handle it with care and ethical consideration.
MiniMaid-L3 is
- Developed by: N-Bot-Int
- License: apache-2.0
- Parent model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
- Dataset combined using: NKDProtoc (proprietary software)
MiniMaid-L3 Official Metric Score

Metrics made by ItsMeDevRoland, comparing:
- MiniMaid-L2 GGUF
- MiniMaid-L3 GGUF
All ranked with the same prompt, the same temperature, and the same hardware (Google Colab), to properly showcase the differences and strengths of the models.
See the details below!
🧵 MiniMaid-L3: Slower Steps, Deeper Stories — The Immersive Upgrade
"She’s more grounded, more convincing — and when it comes to roleplay, she’s in a league of her own."
MiniMaid-L3 doesn’t just iterate — she elevates. Built on L2’s disciplined architecture, L3 doubles down on character immersion and emotional coherence, refining every line she delivers.
- 💬 Roleplay Evaluation (v2)
- 🧠 Character Consistency: 0.54 → 0.55 (+)
- 🌊 Immersion: 0.59 → 0.66 (↑)
- 🎭 Overall RP Score: 0.72 → 0.75
L3’s immersive depth marks a new high in believability and emotional traction — she's not just playing a part, she becomes it.
📊 Slower, But Smarter
- 🕒 Inference Time: 39.1s (↑ from 34.5s)
- ⚡ Tokens/sec: 6.61 (slight dip)
- 📏 BLEU/ROUGE-L: Mixed — slight BLEU gain, ROUGE-L softened
Sure, she takes her time, but it's worth it: L3 trades a few seconds for measured, thoughtful outputs that stick the landing.
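For readers unfamiliar with the overlap metrics quoted above: BLEU counts matching n-grams against a reference, while ROUGE-L scores the longest common subsequence. A minimal ROUGE-L F-measure sketch, for illustration only (not the evaluation harness these numbers came from):

```python
def rouge_l(reference, candidate):
    # ROUGE-L F-measure over whitespace tokens: longest common subsequence
    # (LCS) length, turned into precision/recall and combined as F1.
    ref, cand = reference.split(), candidate.split()
    # Classic LCS dynamic programme.
    dp = [[0] * (len(cand) + 1) for _ in range(len(ref) + 1)]
    for i, r in enumerate(ref):
        for j, c in enumerate(cand):
            dp[i + 1][j + 1] = dp[i][j] + 1 if r == c else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[-1][-1]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(cand), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

A perfect match scores 1.0 and fully disjoint outputs score 0.0, which is why a "softened" ROUGE-L just means the model's phrasing drifted further from the references, not that it got worse at roleplay.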
🎯 Refined Roleplay, Recalibrated Goals
- MiniMaid-L3 isn’t trying to be the fastest. She’s here to be real — holding character, deepening immersion, and generating stories that linger.
- 🛠️ Designed For:
- Narrative-focused deployments
- Long-form interaction and memory retention
- Low-size, high-fidelity simulation
“MiniMaid-L3 sacrifices a bit of speed to speak with soul. She’s no longer just reacting — she’s inhabiting. It’s not about talking faster — it’s about meaning more.”
MiniMaid-L3 is the slow burn that brings the fire.
Notice
- For a good experience, please use:
- temperature = 1.5, min_p = 0.1, and max_new_tokens = 128
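The recommended settings above can be collected into a single kwargs dict and passed to a Transformers pipeline or `model.generate` call. A small sketch (`GENERATION_KWARGS` is our own name for the dict, and `do_sample=True` is an assumption, since `temperature` and `min_p` only take effect when sampling):

```python
# Recommended sampling settings from the notice above.
GENERATION_KWARGS = {
    "do_sample": True,       # temperature / min_p only apply when sampling
    "temperature": 1.5,
    "min_p": 0.1,
    "max_new_tokens": 128,
}

# Example usage (requires the model weights to be available locally):
# from transformers import pipeline
# pipe = pipeline("text-generation", model="N-Bot-Int/MiniMaid-L3")
# pipe([{"role": "user", "content": "Hi!"}], **GENERATION_KWARGS)
```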
Detail card:
Parameter
- 1 Billion Parameters
- (Please check with your GPU vendor whether you can run 1B models)
Finetuning tool:
Unsloth AI
- This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
Fine-tuned Using:
Google Colab

