Built with Axolotl

Axolotl config (axolotl version: 0.14.0.dev0):

base_model: microsoft/Phi-4-mini-instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

# 1. Dataset Configuration
datasets:
  - path: DannyAI/African-History-QA-Dataset
    split: train
    type: alpaca_chat.load_qa
    system_prompt: "You are a helpful AI assistant specialised in African history which gives concise answers to questions asked"
test_datasets:
  - path: DannyAI/African-History-QA-Dataset
    split: validation
    type: alpaca_chat.load_qa
    # Fixed the missing quote and indentation below
    system_prompt: "You are a helpful AI assistant specialised in African history which gives concise answers to questions asked"

# 2. Output & Chat Configuration
output_dir: ./phi4_african_history_lora_out
chat_template: tokenizer_default
train_on_inputs: false

# 3. Batch Size Configuration
micro_batch_size: 2
gradient_accumulation_steps: 4

# 4. LoRA Configuration
adapter: lora
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules: [q_proj, v_proj, k_proj, o_proj]

# 5. Hardware & Efficiency
sequence_len: 2048
sample_packing: true
eval_sample_packing: false 
pad_to_sequence_len: true
bf16: true
fp16: false

# 6. Training Duration & Optimizer
max_steps: 650
# num_epochs removed in favour of max_steps
warmup_steps: 20
learning_rate: 0.00002
optimizer: adamw_torch 
lr_scheduler: cosine

# 7. Logging & Evaluation
wandb_project: phi4_african_history
wandb_name: phi4_lora_axolotl

eval_strategy: steps
eval_steps: 50
save_strategy: steps
save_steps: 100
logging_steps: 5

# 8. Public Hugging Face Hub Upload
hub_model_id: DannyAI/phi4_lora_axolotl
push_adapter_to_hub: true
hub_private_repo: false
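To reproduce the training run, save the configuration above to a YAML file and pass it to Axolotl. Below is a minimal launch sketch; the filename phi4_african_history.yml is illustrative, and recent Axolotl releases expose the axolotl CLI used here (the commented accelerate invocation is the older, equivalent entry point):

import subprocess

# Launch fine-tuning with the config above (assumed to be saved as phi4_african_history.yml)
subprocess.run(["axolotl", "train", "phi4_african_history.yml"], check=True)

# Older entry point, equivalent on most versions:
# subprocess.run(["accelerate", "launch", "-m", "axolotl.cli.train", "phi4_african_history.yml"], check=True)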

Model Card for DannyAI/phi4_lora_axolotl

This is a LoRA fine-tuned version of microsoft/Phi-4-mini-instruct for African-history question answering, trained on the DannyAI/African-History-QA-Dataset dataset. It achieves a validation loss of 1.7479.

Model Details

Model Description

  • Developed by: Daniel Ihenacho
  • Funded by: Daniel Ihenacho
  • Shared by: Daniel Ihenacho
  • Model type: Text Generation
  • Language(s) (NLP): English
  • License: MIT
  • Finetuned from model: microsoft/Phi-4-mini-instruct

Uses

This adapter can be used for question answering about African history.

Out-of-Scope Use

The model can technically answer questions outside African history, but it is not intended for that use.

How to Get Started with the Model

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
from peft import PeftModel


model_id = "microsoft/Phi-4-mini-instruct"

tokeniser = AutoTokenizer.from_pretrained(model_id)

# Load the base model
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=False
)

# Load the fine-tuned LoRA adapter on top of the base model
lora_id = "DannyAI/phi4_lora_axolotl"
lora_model = PeftModel.from_pretrained(model, lora_id)

generator = pipeline(
    "text-generation",
    model=lora_model,
    tokenizer=tokeniser,
)
question = "What is the significance of African feminist scholarly activism in contemporary resistance movements?"
def generate_answer(question: str) -> str:
    """Generate an answer for the given question using the fine-tuned LoRA model."""
    messages = [
        {"role": "system", "content": "You are a helpful AI assistant specialised in African history which gives concise answers to questions asked."},
        {"role": "user", "content": question}
    ]
    
    output = generator(
        messages, 
        max_new_tokens=2048, 
        do_sample=False,   # greedy decoding; temperature has no effect when sampling is disabled
        return_full_text=False
    )
    return output[0]['generated_text'].strip()

print(generate_answer(question))
# Example output:
# African feminist scholarly activism is significant in contemporary resistance movements as it provides
# a critical framework for understanding and addressing the specific challenges faced by African women
# in the context of global capitalism, neocolonialism, and patriarchal structures.
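For deployment without a runtime peft dependency, the adapter can optionally be merged into the base weights. A short sketch continuing from the code above; the output directory name is illustrative:

# Merge the LoRA weights into the base model and save a standalone checkpoint
merged_model = lora_model.merge_and_unload()
merged_model.save_pretrained("phi4-mini-african-history-merged")
tokeniser.save_pretrained("phi4-mini-african-history-merged")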

Training Details

Training results

| Training Loss | Epoch   | Step | Validation Loss | Perplexity | Active (GiB) | Allocated (GiB) | Reserved (GiB) |
|---------------|---------|------|-----------------|------------|--------------|-----------------|----------------|
| No log        | 0       | 0    | 2.1184          | 8.3175     | 14.82        | 14.82           | 15.37          |
| 5.394         | 3.8627  | 50   | 2.1004          | 8.1694     | 14.84        | 14.84           | 31.82          |
| 4.4484        | 7.7059  | 100  | 2.0367          | 7.6652     | 14.84        | 14.84           | 31.84          |
| 3.7583        | 11.5490 | 150  | 1.9785          | 7.2316     | 14.84        | 14.84           | 31.84          |
| 3.363         | 15.3922 | 200  | 1.9299          | 6.8886     | 14.84        | 14.84           | 31.84          |
| 3.0568        | 19.2353 | 250  | 1.8664          | 6.4652     | 14.84        | 14.84           | 31.84          |
| 2.8736        | 23.0784 | 300  | 1.8134          | 6.1314     | 14.84        | 14.84           | 31.79          |
| 2.7646        | 26.9412 | 350  | 1.7851          | 5.9604     | 14.84        | 14.84           | 31.79          |
| 2.6891        | 30.7843 | 400  | 1.7668          | 5.8523     | 14.84        | 14.84           | 31.79          |
| 2.6843        | 34.6275 | 450  | 1.7581          | 5.8014     | 14.84        | 14.84           | 31.79          |
| 2.6048        | 38.4706 | 500  | 1.7534          | 5.7739     | 14.84        | 14.84           | 31.79          |
| 2.6118        | 42.3137 | 550  | 1.7505          | 5.7573     | 14.84        | 14.84           | 31.79          |
| 2.6024        | 46.1569 | 600  | 1.7503          | 5.7565     | 14.84        | 14.84           | 31.79          |
| 2.5727        | 50.0    | 650  | 1.7479          | 5.7428     | 14.84        | 14.84           | 31.79          |
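The perplexity column is simply the exponential of the validation loss, which is easy to sanity-check:

import math

val_loss = 1.7479          # final validation loss from the table above
print(math.exp(val_loss))  # ~5.74, matching the reported perplexity of 5.7428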

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 20
  • training_steps: 650
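Axolotl derives the trainer arguments from the YAML config. For anyone reproducing the run directly with transformers, a rough TrainingArguments equivalent is sketched below (an approximation of the settings listed above, not the exact object Axolotl constructs):

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./phi4_african_history_lora_out",
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,   # effective batch size: 2 * 4 = 8
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_steps=20,
    max_steps=650,
    optim="adamw_torch",
    bf16=True,
    seed=42,
    eval_strategy="steps",
    eval_steps=50,
    save_strategy="steps",
    save_steps=100,
    logging_steps=5,
)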

LoRA Configuration

  • r: 8
  • lora_alpha: 16
  • target_modules: ["q_proj", "v_proj", "k_proj", "o_proj"]
  • lora_dropout: 0.05 (kept low because the dataset is small)
  • bias: "none"
  • task_type: "CAUSAL_LM"
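The same adapter configuration expressed with peft's LoraConfig, for readers rebuilding the setup outside Axolotl:

from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,   # kept low because the dataset is small
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)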

Evaluation

Metrics

| Model            | BERTScore | TinyMMLU | TinyTruthfulQA |
|------------------|-----------|----------|----------------|
| Base model       | 0.88868   | 0.6837   | 0.49745        |
| Fine-tuned model | 0.88981   | 0.67371  | 0.46626        |
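The BERTScore column can be reproduced with the evaluate library. A minimal sketch; predictions and references below are placeholders for the model answers and the reference answers from the validation split, and the library's default BERTScore settings are assumed since the card does not specify them:

import evaluate

bertscore = evaluate.load("bertscore")
predictions = ["..."]  # placeholder: model-generated answers
references = ["..."]   # placeholder: reference answers from the validation split
scores = bertscore.compute(predictions=predictions, references=references, lang="en")
print(sum(scores["f1"]) / len(scores["f1"]))  # mean BERTScore F1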

Compute Infrastructure

Runpod.

Hardware

Runpod A40 GPU instance

Framework versions

  • PEFT 0.18.1
  • Transformers 4.57.6
  • Pytorch 2.9.1+cu128
  • Datasets 4.5.0
  • Tokenizers 0.22.2

Citation

If you use this model, please cite:

@misc{Ihenacho2026phi4_lora_axolotl,
  author    = {Daniel Ihenacho},
  title     = {phi4_lora_axolotl},
  year      = {2026},
  publisher = {Hugging Face Models},
  url       = {https://huggingface.co/DannyAI/phi4_lora_axolotl},
  urldate   = {2026-01-27},
}

Model Card Authors

Daniel Ihenacho

Model Card Contact
