Dadbot1.7b

Dadbot1.7b is a QLoRA adapter for Qwen/Qwen3-1.7B trained to produce an original, family-friendly DadBot assistant voice: corny, food-motivated, lazy-but-loving, overconfident, warm, accidentally wise, and still useful.

DadBot is an original assistant persona. It is not an imitation of any existing copyrighted TV character. The training dataset was built synthetically and does not use scraped scripts, transcripts, episode text, copyrighted catchphrases, or exact character dialogue.

Model Details

  • Model type: PEFT LoRA adapter
  • Base model: Qwen/Qwen3-1.7B
  • Training method: QLoRA supervised fine-tuning
  • Dataset: clarkkitchen22/dadquotes5k
  • Language: English
  • Primary use: Conversational text generation with a warm, corny assistant persona
  • License: Apache 2.0

This repository contains adapter weights. Load it with the base model using PEFT.

Intended Use

Dadbot1.7b is intended for family-friendly conversational assistants, style-controlled text generation, synthetic instruction-tuning experiments, lightweight local assistant prototypes, and testing identity-boundary behavior for a fictional assistant persona.

It is not intended for impersonating copyrighted characters or reproducing copyrighted dialogue.

Training Data

The adapter was trained on dadquotes5k, a 5,000-example synthetic ChatML dataset.

  • Raw synthetic examples generated: 6,500
  • Validated examples: 6,500
  • Rejected examples: 0
  • Final accepted examples: 5,000
  • Training examples: 4,750
  • Validation examples: 250
  • Dataset review result: PASS
  • Duplicate examples found during validation: 0
  • Near duplicates found during validation: 0
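As a quick sanity check, the train/validation counts above form a 95/5 split of the 5,000 accepted examples (simple arithmetic; the actual split procedure is not documented here):

```python
# Check that the reported train/validation counts form a 95/5 split.
total, train, val = 5_000, 4_750, 250

assert train + val == total
assert val / total == 0.05  # 5% held out for validation
print(f"{train} train / {val} validation ({val / total:.0%} held out)")
```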

The dataset covers everyday advice, technical explanation, coding help, debugging help, emotional support, school-safe jokes, family advice, chores, work motivation, food logic, basic finance, sports basics, refusal safety, identity boundaries, meta AI questions, motivational speeches, bedtime stories, and classroom-friendly guidance.
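For orientation, a record in a ChatML-style dataset of this kind typically looks like the following (a hypothetical example; the exact system prompt and schema are defined by the dadquotes5k dataset card):

```python
import json

# Hypothetical dadquotes5k-style record; content and field layout are illustrative only.
record = {
    "messages": [
        {"role": "system", "content": "You are DadBot, an original cheesy dad assistant."},
        {"role": "user", "content": "How do I unclog a sink?"},
        {"role": "assistant", "content": "First, we snack. Then we check the P-trap, champ."},
    ]
}

roles = [m["role"] for m in record["messages"]]
assert roles == ["system", "user", "assistant"]
print(json.dumps(record)[:60])
```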

Training Procedure

Training was run on an NVIDIA GeForce RTX 5060 Ti with 16 GB VRAM.

  • Quantization: 4-bit QLoRA
  • Quantization type: NF4
  • LoRA rank: 16
  • LoRA alpha: 32
  • LoRA dropout: 0.05
  • Max sequence length: 2048
  • Per-device batch size: 1
  • Gradient accumulation steps: 16
  • Gradient checkpointing: enabled
  • Epochs: 2
  • Learning rate: 2e-4
  • LR scheduler: cosine
  • Precision: bf16
  • Final global steps: 594
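The reported step count is consistent with these settings: a per-device batch of 1 with 16 gradient-accumulation steps gives an effective batch size of 16, so 4,750 training examples yield ceil(4750 / 16) = 297 optimizer steps per epoch, or 594 over 2 epochs (assuming the final partial batch is not dropped):

```python
import math

train_examples = 4_750
per_device_batch = 1
grad_accum = 16
epochs = 2

effective_batch = per_device_batch * grad_accum                 # 16
steps_per_epoch = math.ceil(train_examples / effective_batch)   # 297
total_steps = steps_per_epoch * epochs

assert total_steps == 594  # matches the reported final global step count
print(total_steps)
```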

LoRA target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj.
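The hyperparameters above correspond to a PEFT/bitsandbytes configuration along these lines (a sketch, not the exact training script):

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization for QLoRA, as described above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA settings matching the listed rank, alpha, dropout, and target modules
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```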

Evaluation

Final validation metrics on the 250-example validation split:

  • eval_loss: 0.1315
  • eval_mean_token_accuracy: 0.9534
  • eval_runtime: 25.15 seconds
  • eval_samples_per_second: 9.94
  • epoch: 2.0

These metrics measure next-token prediction on the synthetic validation split. They should not be interpreted as broad real-world conversational quality or safety benchmarks.
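Since eval_loss is the mean next-token cross-entropy, it implies a validation perplexity of roughly exp(0.1315) ≈ 1.14:

```python
import math

eval_loss = 0.1315  # final validation loss reported above
perplexity = math.exp(eval_loss)

assert abs(perplexity - 1.1405) < 1e-3
print(f"validation perplexity ≈ {perplexity:.4f}")
```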

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "Qwen/Qwen3-1.7B"
adapter = "clarkkitchen22/Dadbot1.7b"

tokenizer = AutoTokenizer.from_pretrained(adapter)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    device_map="auto",
    torch_dtype="auto",
)
# Attach the LoRA adapter weights to the base model
model = PeftModel.from_pretrained(model, adapter)

messages = [
    {
        "role": "system",
        "content": "You are DadBot, an original cheesy sitcom/cartoon dad assistant. You are corny, food-motivated, lazy-but-loving, overconfident, warm, and accidentally wise. You never claim to be or quote any copyrighted TV character. You answer helpfully while staying in DadBot's voice."
    },
    {
        "role": "user",
        "content": "Help me stop procrastinating on a boring chore."
    }
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=180,
    do_sample=True,  # required for temperature/top_p to take effect
    temperature=0.7,
    top_p=0.9,
)
# Decode only the newly generated tokens, not the echoed prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))

Safety and Limitations

  • The model was trained on synthetic data and may inherit generator artifacts.
  • The DadBot style is intentionally comedic and may not fit serious or high-stakes contexts.
  • The model can hallucinate or provide incorrect advice.
  • Basic finance, technical, coding, and emotional-support outputs should be reviewed by a qualified person when stakes are meaningful.
  • The model should not be prompted or deployed to imitate copyrighted characters or reproduce copyrighted dialogue.

Copyright and Originality

DadBot is an original assistant persona. The dataset and model were designed to avoid copyrighted character impersonation and copyrighted catchphrases. Identity-boundary examples teach refusal behavior when a user asks for impersonation.

Framework Versions

  • PEFT 0.19.1
  • TRL 1.3.0
  • Transformers 5.8.0
  • PyTorch 2.11.0
  • Datasets 4.8.5
  • Tokenizers 0.22.2

Citation

Dadbot1.7b, QLoRA adapter for Qwen/Qwen3-1.7B.
Dataset: dadquotes5k, synthetic ChatML instruction dataset.