Instructions to use N-Bot-Int/KrizNore4-E2B-v1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use N-Bot-Int/KrizNore4-E2B-v1 with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("N-Bot-Int/KrizNore4-E2B-v1", dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Unsloth Studio
How to use N-Bot-Int/KrizNore4-E2B-v1 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for N-Bot-Int/KrizNore4-E2B-v1 to start chatting
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for N-Bot-Int/KrizNore4-E2B-v1 to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for N-Bot-Int/KrizNore4-E2B-v1 to start chatting
```
Load model with FastModel
```shell
pip install unsloth
```
```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="N-Bot-Int/KrizNore4-E2B-v1",
    max_seq_length=2048,
)
```
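Once `FastModel.from_pretrained` returns a model/tokenizer pair, it can be used like any Hugging Face causal LM. The sketch below is a hedged example, not part of the model card: the chat-message format and `apply_chat_template` usage are assumptions based on Gemma-style instruction models, and the prompt is illustrative.

```python
# Hedged sketch: generating a reply from the model/tokenizer pair returned
# by FastModel.from_pretrained. The chat roles below are assumptions based
# on Gemma-style instruction-tuned models.

def build_messages(user_prompt: str) -> list[dict]:
    """Wrap a user prompt in the message format expected by
    tokenizer.apply_chat_template."""
    return [{"role": "user", "content": user_prompt}]

if __name__ == "__main__":
    from unsloth import FastModel  # requires `pip install unsloth`

    model, tokenizer = FastModel.from_pretrained(
        model_name="N-Bot-Int/KrizNore4-E2B-v1",
        max_seq_length=2048,
    )
    # Render the chat into token IDs with the model's own template
    inputs = tokenizer.apply_chat_template(
        build_messages("Hello! Who are you?"),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```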
LORA ADAPTER OF N-Bot-Int/KrizNore4-E2B-v1
(THIS ADAPTER IS ONLY FOR FINETUNING; USE mradermacher/KrizNore4-E2B-v1-merged-GGUF FOR THE GGUF!)
- Developed by: N-Bot-Int
- License: agpl-3.0
- Finetuned from model: p-e-w/gemma-4-E2B-it-heretic-ara
This Gemma4 model was trained 2x faster with Unsloth
FEEL FREE TO FINETUNE THE AI MODEL; HOWEVER, PLEASE READ THE DISCLAIMER HERE! (LEGACY: ADAPTERS SET AS APACHE-2.0)
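Since this repository is a LoRA adapter rather than full merged weights, it can also be attached to the base model with PEFT instead of Unsloth. The sketch below is a hedged example: the two repository names come from this model card, but the workflow itself is an assumption based on the standard transformers + peft pattern.

```python
# Hedged sketch: attaching this LoRA adapter to its base model with PEFT.
# Repo names are taken from the model card; everything else is an assumed
# standard transformers + peft workflow.

BASE_MODEL = "p-e-w/gemma-4-E2B-it-heretic-ara"  # base model from the card
ADAPTER = "N-Bot-Int/KrizNore4-E2B-v1"           # this LoRA adapter

if __name__ == "__main__":
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, dtype="auto")
    model = PeftModel.from_pretrained(base, ADAPTER)
    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

    # merge_and_unload() folds the adapter weights into the base model,
    # yielding a standalone model for plain transformers inference
    merged = model.merge_and_unload()
```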
Inference Providers
This model isn't deployed by any Inference Provider.
Model tree for N-Bot-Int/KrizNore4-E2B-v1
- Base model: p-e-w/gemma-4-E2B-it-heretic-ara
