---
viewer: false
tags:
- uv-script
- unsloth
- training
- hf-jobs
- vlm
- fine-tuning
---

# 🦥 Unsloth Training Scripts for HF Jobs

UV scripts for fine-tuning LLMs and VLMs using [Unsloth](https://github.com/unslothai/unsloth) on [HF Jobs](https://huggingface.co/docs/hub/jobs) (on-demand cloud GPUs). UV handles dependency installation automatically, so you can run these scripts directly without any local setup.

These scripts can also be used or adapted by agents to train models for you.

## Prerequisites

- A Hugging Face account
- The [HF CLI](https://huggingface.co/docs/huggingface_hub/main/en/guides/cli) installed and authenticated (`hf auth login`)
- A dataset on the Hub in the appropriate format (see the format requirements below). A strong LLM agent can often convert your data into the right format if needed.

## Data Formats

### LLM Fine-tuning (SFT)

Requires conversation data in ShareGPT or a similar format:

```python
{
    "messages": [
        {"from": "human", "value": "What is the capital of France?"},
        {"from": "gpt", "value": "The capital of France is Paris."}
    ]
}
```

The script auto-converts common formats (ShareGPT, Alpaca, etc.) via `standardize_data_formats`. See [mlabonne/FineTome-100k](https://huggingface.co/datasets/mlabonne/FineTome-100k) for a working dataset example.
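If you do need to convert data yourself, e.g. from Alpaca-style `instruction`/`input`/`output` records, the mapping is mechanical. A minimal, dependency-free sketch (the helper name is hypothetical; the training script's own auto-conversion covers common formats already):

```python
def alpaca_to_sharegpt(example):
    """Convert one Alpaca-style record into the ShareGPT messages layout.

    Hypothetical helper for illustration only; apply it per-row to your
    dataset before pushing to the Hub.
    """
    prompt = example["instruction"]
    if example.get("input"):  # optional context field in Alpaca data
        prompt += "\n\n" + example["input"]
    return {
        "messages": [
            {"from": "human", "value": prompt},
            {"from": "gpt", "value": example["output"]},
        ]
    }

record = {
    "instruction": "What is the capital of France?",
    "input": "",
    "output": "The capital of France is Paris.",
}
converted = alpaca_to_sharegpt(record)
```

With `datasets`, you would apply this via `dataset.map(alpaca_to_sharegpt)` before pushing.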

### VLM Fine-tuning

Requires `images` and `messages` columns:

```python
{
    "images": [<PIL.Image>],  # List of images
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": "What's in this image?"}
            ]
        },
        {
            "role": "assistant",
            "content": [
                {"type": "text", "text": "A golden retriever playing fetch in a park."}
            ]
        }
    ]
}
```

See [davanstrien/iconclass-vlm-sft](https://huggingface.co/datasets/davanstrien/iconclass-vlm-sft) for a working dataset example, and [davanstrien/iconclass-vlm-qwen3-best](https://huggingface.co/davanstrien/iconclass-vlm-qwen3-best) for a model trained with these scripts.
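Building the `messages` column programmatically is straightforward. A dependency-free sketch (the helper name is hypothetical; the matching `images` column of PIL images is loaded separately and omitted here):

```python
def make_vlm_record(question, answer, n_images=1):
    """Build the `messages` structure shown above for one training example.

    Hypothetical helper for illustration; each {"type": "image"} placeholder
    must correspond to one image in the record's `images` column.
    """
    user_content = [{"type": "image"} for _ in range(n_images)]
    user_content.append({"type": "text", "text": question})
    return {
        "messages": [
            {"role": "user", "content": user_content},
            {"role": "assistant", "content": [{"type": "text", "text": answer}]},
        ]
    }

record = make_vlm_record(
    "What's in this image?",
    "A golden retriever playing fetch in a park.",
)
```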

### Continued Pretraining

Any dataset with a text column:

```python
{"text": "Your domain-specific text here..."}
```

Use `--text-column` if your column has a different name.
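If your corpus consists of very long documents, you may want to split them into rows before pushing to the Hub. A naive character-based sketch (hypothetical helper; the chunk size is arbitrary, and the training script tokenizes and packs sequences itself, so exact boundaries are not critical):

```python
def chunk_text(text, max_chars=2000):
    """Split one long document into {"text": ...} rows by fixed character
    windows. Illustrative only; a real pipeline might split on paragraph
    or sentence boundaries instead.
    """
    return [{"text": text[i:i + max_chars]} for i in range(0, len(text), max_chars)]

rows = chunk_text("x" * 5000)  # 5000 chars -> chunks of 2000, 2000, 1000
```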

## Usage

View the available options for any script:

```bash
uv run https://huggingface.co/datasets/unsloth/jobs/raw/main/sft-lfm2.5.py --help
```

### LLM fine-tuning

Fine-tune [LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct), a compact and efficient text model from Liquid AI:

```bash
hf jobs uv run \
    https://huggingface.co/datasets/unsloth/jobs/raw/main/sft-lfm2.5.py \
    --flavor a10g-small --secrets HF_TOKEN --timeout 4h \
    -- --dataset mlabonne/FineTome-100k \
    --num-epochs 1 \
    --eval-split 0.2 \
    --output-repo your-username/lfm-finetuned
```

### VLM fine-tuning

```bash
hf jobs uv run \
    https://huggingface.co/datasets/unsloth/jobs/raw/main/sft-qwen3-vl.py \
    --flavor a100-large --secrets HF_TOKEN \
    -- --dataset your-username/dataset \
    --trackio-space your-username/trackio \
    --output-repo your-username/my-model
```

### Continued pretraining

```bash
hf jobs uv run \
    https://huggingface.co/datasets/unsloth/jobs/raw/main/continued-pretraining.py \
    --flavor a100-large --secrets HF_TOKEN \
    -- --dataset your-username/domain-corpus \
    --text-column content \
    --max-steps 1000 \
    --output-repo your-username/domain-llm
```

### With Trackio monitoring

```bash
hf jobs uv run \
    https://huggingface.co/datasets/unsloth/jobs/raw/main/sft-lfm2.5.py \
    --flavor a10g-small --secrets HF_TOKEN \
    -- --dataset mlabonne/FineTome-100k \
    --trackio-space your-username/trackio \
    --output-repo your-username/lfm-finetuned
```

## Scripts

| Script | Base Model | Task |
| --- | --- | --- |
| [`sft-lfm2.5.py`](sft-lfm2.5.py) | LFM2.5-1.2B-Instruct | LLM fine-tuning (recommended) |
| [`sft-qwen3-vl.py`](sft-qwen3-vl.py) | Qwen3-VL-8B | VLM fine-tuning |
| [`sft-gemma3-vlm.py`](sft-gemma3-vlm.py) | Gemma 3 4B | VLM fine-tuning (smaller) |
| [`continued-pretraining.py`](continued-pretraining.py) | Qwen3-0.6B | Domain adaptation |

## Common Options

| Option | Description | Default |
| --- | --- | --- |
| `--dataset` | HF dataset ID | _required_ |
| `--output-repo` | Where to save the trained model | _required_ |
| `--max-steps` | Number of training steps | 500 |
| `--num-epochs` | Train for N epochs instead of steps | - |
| `--eval-split` | Fraction held out for evaluation (e.g., 0.2) | 0 (disabled) |
| `--batch-size` | Per-device batch size | 2 |
| `--gradient-accumulation` | Gradient accumulation steps | 4 |
| `--lora-r` | LoRA rank | 16 |
| `--learning-rate` | Learning rate | 2e-4 |
| `--merge-model` | Upload the merged model (not just the adapter) | false |
| `--trackio-space` | HF Space for live monitoring | - |
| `--run-name` | Custom name for the Trackio run | auto |

## Tips

- Use `--max-steps 10` to verify everything works before a full run
- `--eval-split 0.1` helps detect overfitting
- Run `hf jobs hardware` to see GPU pricing (A100-large ~$2.50/hr, L40S ~$1.80/hr)
- Add `--streaming` for very large datasets
- The first training step may take a few minutes (CUDA kernel compilation)

## Links

- [HF Jobs Quickstart](https://huggingface.co/docs/hub/jobs-quickstart)
- [Unsloth Documentation](https://docs.unsloth.ai/)
- [UV Scripts Guide](https://docs.astral.sh/uv/guides/scripts/)