phi-3.5-AI-Vtuber-json : GGUF

This is a fine-tuned large language model based on Phi-3.5 Mini-Instruct, optimized for AI companion applications that require strict, machine-readable JSON output.

The model was trained to always return responses in a consistent JSON format with response and emotion fields. This makes it easy to integrate with software that parses and uses AI outputs programmatically.

๐Ÿ”— Project Nova, where i have used this model: https://github.com/Navjot-Singh7/Project-Nova


Model Overview

  • Base Model: Phi-3.5 Mini-Instruct

  • Fine-Tuned For: AI companion behavior with structured JSON output

Output Format:

{
  "response": "...",
  "emotion": "..."
}

Primary Use Case: AI companion systems and applications where responses must be machine-readable.
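
Because every reply is expected to follow this schema, application code can parse it directly. Below is a minimal Python sketch of such a parser; the fallback behavior (treating malformed output as plain text with a "neutral" emotion) is an assumption of this sketch, not something defined by the model card.

import json

def parse_reply(raw: str) -> tuple[str, str]:
    """Parse the model's JSON reply into (response, emotion)."""
    try:
        data = json.loads(raw)
        return data["response"], data.get("emotion", "neutral")
    except (json.JSONDecodeError, KeyError, TypeError):
        # Fallback: even a fine-tuned model can occasionally drift from the
        # schema, so treat the whole string as plain text.
        return raw.strip(), "neutral"

print(parse_reply('{"response": "Hi there!", "emotion": "happy"}'))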


Capabilities

This model has been fine-tuned to:

  • Generate companion-style text that is appropriate, engaging, and formatted as JSON.

  • Always include both fields:

    • response: the AI's text output

    • emotion: a tag describing the emotional tone of the response

  • Produce outputs that are consistent and reliable for code integration.


Intended Use

Primary Use Cases

  • AI companion applications
  • Virtual characters or avatars
  • VTuber or assistant personalities
  • Applications that require structured LLM output
  • Emotion-aware conversational systems

Training Details

Custom Dataset - I created my own labeled dataset of 10–20 original JSON-style examples.

Synthetic Data Generation - Then I expanded this initial dataset using another language model to create a synthetic training corpus of ~1,800 samples.

Fine-Tuning Environment - Training was performed using Google Colab.

Dataset Composition - The dataset contains structured examples that guide the model to generate JSON output with response and emotion.
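
The exact dataset schema and the teacher model are not published here, but as an illustration, a synthetic-expansion step of the kind described above could look roughly like the sketch below. The ask_teacher helper, the seed prompts, and the instruction/output record layout are purely hypothetical placeholders; only the requirement that each sample pairs a user turn with a JSON reply containing response and emotion comes from this card.

import json

def ask_teacher(prompt: str) -> str:
    # Placeholder for whatever language model was used to expand the seed set.
    # Here it returns a canned reply so the sketch runs end to end.
    return '{"response": "Of course! Tell me more about it.", "emotion": "curious"}'

seed_prompts = ["How was your day?", "Can you cheer me up?"]  # hypothetical seeds

with open("synthetic_corpus.jsonl", "w", encoding="utf-8") as f:
    for prompt in seed_prompts:
        raw = ask_teacher(prompt)
        try:
            sample = json.loads(raw)  # keep only well-formed JSON samples
        except json.JSONDecodeError:
            continue
        if isinstance(sample, dict) and {"response", "emotion"} <= sample.keys():
            record = {"instruction": prompt, "output": sample}
            f.write(json.dumps(record, ensure_ascii=False) + "\n")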


Usage Example

Below is an example of how the model might respond in your application:

{
  "response": "Hello! I'm fine thank you... uhm.. did you have a good day?", 
  "emotion": "happy"
}

This makes it easy to parse and handle both the semantic content (response) and the emotional context (emotion) in code.
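
For example, the emotion tag can drive presentation, such as selecting an avatar expression or a TTS style. The mapping below is a hypothetical sketch; the set of emotion labels the model actually emits is not enumerated on this card.

import json

# Hypothetical mapping from emotion tags to avatar expressions.
EXPRESSIONS = {"happy": "smile", "sad": "frown", "curious": "head_tilt"}

reply = json.loads('{"response": "Hello! I\'m fine thank you... uhm.. did you have a good day?", "emotion": "happy"}')
print(reply["response"])                          # text to display or speak
print(EXPRESSIONS.get(reply["emotion"], "idle"))  # expression to show, default "idle"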


License

This model is licensed under the MIT License. You are free to use, modify, and distribute this model for personal or educational purposes.

Available Model files:

  • phi-3.5-mini-instruct.Q4_K_M.gguf

Ollama

An Ollama Modelfile is included for easy deployment. The model was trained 2x faster with Unsloth.
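
As a rough sketch of local use through Ollama: assuming the GGUF has been imported with the included Modelfile under a name such as "ai-vtuber-json" (the name is an assumption, not documented here), the Ollama Python client could be used like this.

import json
import ollama  # pip install ollama; requires a running local Ollama server

# "ai-vtuber-json" is a placeholder; use whatever name you gave `ollama create`
# when importing the included Modelfile.
result = ollama.chat(
    model="ai-vtuber-json",
    messages=[{"role": "user", "content": "How are you today?"}],
    format="json",  # ask Ollama to constrain the output to valid JSON
)

reply = json.loads(result["message"]["content"])
print(reply["response"], "|", reply["emotion"])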

Model Specs

  • Format: GGUF
  • Model size: 4B params
  • Architecture: llama
  • Quantization: 4-bit