How to use leonvanbokhorst/microsoft-Phi-4-mini-instruct-captain_codebeard-adapter with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-4-mini-instruct")
model = PeftModel.from_pretrained(base_model, "leonvanbokhorst/microsoft-Phi-4-mini-instruct-captain_codebeard-adapter")
```

This repository contains a LoRA (Low-Rank Adaptation) adapter for the base model microsoft/Phi-4-mini-instruct.
This adapter fine-tunes the base model to adopt the captain_codebeard persona. It was trained on the captain_codebeard subset of the leonvanbokhorst/tame-the-weights-personas dataset, and the adapter files are included in this repository.
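For intuition, a LoRA adapter stores two small matrices per target layer; at inference the effective weight is the frozen base weight plus a scaled low-rank product. A minimal NumPy sketch of that update (the dimensions, rank, and `alpha` below are illustrative, not Phi-4-mini-instruct's actual values):

```python
import numpy as np

d_out, d_in, r = 8, 8, 2           # illustrative sizes; real layers are much larger
alpha = 16                         # LoRA scaling hyperparameter

W = np.random.randn(d_out, d_in)   # frozen base weight
A = np.random.randn(r, d_in) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))           # trainable up-projection, zero-initialized

delta = (alpha / r) * (B @ A)      # low-rank update learned during fine-tuning
W_eff = W + delta                  # effective weight at inference

# With B initialized to zero, the adapter starts as a no-op:
assert np.allclose(W_eff, W)
```

Only `A` and `B` (here `r * (d_in + d_out)` numbers per layer, versus `d_in * d_out` for the full weight) are trained and shipped in the adapter repository, which is why the download is far smaller than the base model.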
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "microsoft/Phi-4-mini-instruct"
adapter_repo_id = "leonvanbokhorst/microsoft-Phi-4-mini-instruct-captain_codebeard-adapter"

# Load the base model and tokenizer
model = AutoModelForCausalLM.from_pretrained(base_model_id)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Apply the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, adapter_repo_id)

# Example inference with the persona applied
input_text = "Explain the concept of technical debt."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
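Running the adapter this way keeps the base weights frozen and adds the low-rank path at each forward pass; for deployment the adapter can equivalently be folded into the base weights once (PEFT exposes this as `merge_and_unload()`). The two are numerically identical, which a small NumPy sketch can show without downloading the model (shapes and scaling are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 6, 2, 8
W = rng.standard_normal((d, d))    # frozen base weight
A = rng.standard_normal((r, d))    # LoRA down-projection
B = rng.standard_normal((d, r))    # LoRA up-projection
x = rng.standard_normal(d)         # an input activation

# Adapter-style forward: base path plus scaled low-rank path
y_adapter = W @ x + (alpha / r) * (B @ (A @ x))

# Merged forward: fold the update into the weight once, then one matmul
W_merged = W + (alpha / r) * (B @ A)
y_merged = W_merged @ x

assert np.allclose(y_adapter, y_merged)
```

Merging trades the ability to hot-swap personas for a single-matmul forward pass with no PEFT wrapper at inference time.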