
HyzeMini V2

The next-gen SLM by Hyze AI

πŸ”— Chat with all models β€’ πŸ“˜ HyzeAcademy β€’ 🧠 HyzeNote (NotebookLM alternative)


πŸš€ Overview

HyzeMini V2 is a compact and efficient text-generation transformer model optimized for General Chat πŸ’¬.

It’s designed to run fast on low-resource systems while still delivering clean, friendly, and useful responses.

  • Model type: Transformer-based LLM
  • Parameters: 400M
  • Precision: BF16
  • Language: English
  • License: Apache-2.0

🧠 Training Focus

HyzeMini V2 was trained on a curated mixture of Open-Source English datasets, with emphasis on:

  • πŸ’¬ General Chat
    • Casual conversation
    • Q&A-style prompts
    • Friendly assistant tone

πŸ”¬ About the Founder

HyzeAI was created by Hitesh Vinothkumar, who is 12 years old. HyzeLabs focuses on learning, experimentation, and open access, blending software engineering with curiosity about the universe.


πŸ“Š Benchmarks (Qualitative Comparison)

HyzeMini V2 focuses on speed, coherence, and domain knowledge rather than raw reasoning power.

| Model | Size | Strengths | Tradeoffs |
|---|---|---|---|
| HyzeMini V2 | ~0.4B | General-focused, fast, chat-friendly | Limited deep reasoning |
| TinyLlama | ~1.1B | Solid general generation | More generic responses |
| GPT-Neo 125M | ~0.125B | Better general reasoning | Slower, higher memory |
| GPT-1 | ~0.117B | Historical baseline | Less coherent by modern standards |