Gemma-3-4B-IT-Uncensored-v2
This repository contains quantized GGUF versions of Gemma 3 4B IT Uncensored v2, an instruction-tuned 4B-parameter language model for users who want a highly responsive, minimally restricted assistant suited to local, offline, or private deployments. The model is optimized for direct interaction, reasoning, creative tasks, and experimentation, while preserving the efficiency and accessibility of a smaller parameter count.
Model Overview
- Model Name: Gemma-3-4B-IT-Uncensored-v2
- Base Architecture: Gemma 3 (4B parameters)
- License: Inherits the license terms of the original Gemma 3 model
- Intended Use: Local or private deployments where users want greater control over alignment, filtering behavior, and conversational tone
What Is Gemma 3 4B IT Uncensored v2?
Gemma-3-4B-IT-Uncensored-v2 is a lightly aligned, instruction-following model focused on:
- User-directed alignment
- Reduced artificial guardrails
- High responsiveness and clarity
- Strong reasoning and step-by-step task handling
- Efficient inference on consumer hardware
This version (v2) improves response quality, instruction adherence, and conversational stability over earlier releases, making it suitable for both casual and advanced users.
Chat Template & Conversation Format
The model follows a Gemma-style instruction format, typically structured as:
```
<start_of_turn>user
Your prompt here
<end_of_turn>
<start_of_turn>model
```
Using the correct chat template is strongly recommended for optimal instruction-following and response quality.
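For reference, here is a minimal sketch of assembling a prompt in this format by hand. The helper function and message structure are illustrative only (not part of this release); most inference frontends such as llama.cpp can usually apply the template automatically from the GGUF metadata.

```python
# Minimal sketch: build a Gemma-style prompt by hand.
# The function name and message dict layout are illustrative, not part of this release.

def format_gemma_prompt(messages):
    """Render a list of {"role", "content"} dicts into Gemma turn markers."""
    parts = []
    for msg in messages:
        # Gemma uses "user" and "model" as turn names; map "assistant" to "model".
        role = "model" if msg["role"] == "assistant" else "user"
        parts.append(f"<start_of_turn>{role}\n{msg['content']}\n<end_of_turn>\n")
    # Leave the final model turn open so generation continues from here.
    parts.append("<start_of_turn>model\n")
    return "".join(parts)

prompt = format_gemma_prompt([
    {"role": "user", "content": "Explain GGUF quantization in two sentences."}
])
print(prompt)
```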
Key Features & Capabilities
- Instruction-tuned for clear, concise, and user-aligned responses
- Uncensored behavioral tuning for research and experimentation
- Effective at conversational, creative, and reasoning tasks
- Supports multi-step reasoning and structured answers
- Optimized for local inference (CPU and GPU friendly)
- Stable output across longer conversations
- Suitable for alignment research and prompt engineering
Intended Use Cases
- Local assistants – personal chatbots, productivity tools, role-play systems
- Coding support – explanations, examples, lightweight debugging
- Reasoning tasks – logical breakdowns, step-by-step problem solving
- Creative writing – stories, dialogue, brainstorming
- Experimentation – uncensored model behavior, alignment testing
- Offline / private use – scenarios requiring data locality and user control
Hardware & Performance Notes
With only 4B parameters, this model is well-suited for:
- Consumer GPUs
- Quantized CPU inference
- Embedded or low-resource environments
It offers a strong balance between performance, responsiveness, and efficiency.
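As an illustration of local inference, the sketch below loads one of the quantized GGUF files with llama-cpp-python. The filename is hypothetical (substitute whichever quantization level you downloaded), and the GPU offload setting assumes a GPU build is available.

```python
# Minimal sketch of local inference with llama-cpp-python (pip install llama-cpp-python).
# The GGUF filename below is hypothetical; use the actual file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-3-4b-it-uncensored-v2.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if available; use 0 for CPU-only
)

# llama.cpp can usually apply the chat template stored in the GGUF metadata,
# so messages can be passed in role/content form.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the benefits of 4B models for local use."}],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```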
Disclaimer
This model is uncensored and designed for research, experimentation, and user-controlled environments. Outputs may include content that would normally be filtered in more restrictive models. Users are responsible for ensuring compliance with applicable laws, policies, and ethical guidelines when deploying or using this model.
Acknowledgements
Special thanks to:
- The creators and maintainers of the Gemma 3 architecture
- The open-source community supporting training, fine-tuning, quantization, and deployment tools
Available Quantizations
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- 16-bit
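To fetch a single quantization rather than cloning the whole repository, something like the sketch below should work with huggingface_hub. The exact GGUF filename is not listed here, so the one shown is a placeholder to replace with a real file name from the repository's file list.

```python
# Sketch: download one quantized file with huggingface_hub (pip install huggingface_hub).
# The filename is a placeholder; check the repository for the exact GGUF file names.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="Andycurrent/gemma-3-4b-it-uncensored-v2-GGUF",
    filename="gemma-3-4b-it-uncensored-v2.Q4_K_M.gguf",  # placeholder filename
)
print(local_path)  # path to the cached GGUF file, ready for llama.cpp
```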
Model Tree
- GGUF repository: Andycurrent/gemma-3-4b-it-uncensored-v2-GGUF
- Base model: braindao/gemma-3-4b-it-uncensored-v2