---
license: apache-2.0
language:
- en
library_name: gguf
tags:
- gguf
- shell
- macos
- terminal
- command-line
- qwen2
- lora
- ollama
base_model: Qwen/Qwen2.5-1.5B
model_name: harshell
pipeline_tag: text-generation
quantized_by: josharsh
---

# Harshell - Natural Language to macOS Shell Commands

**Harshell** is a fine-tuned Qwen 2.5 1.5B model that converts natural language into macOS shell commands. It returns only the command — no explanations, no markdown, just the shell command you need.

## Model Details

| Property | Value |
|---|---|
| Base Model | Qwen 2.5 1.5B |
| Fine-tuning | LoRA (rank 8, 1000 iterations) |
| Quantization | Q8_0 GGUF |
| File Size | ~1.5 GB |
| License | Apache 2.0 |

## Quick Start with Ollama

1. Download the GGUF and Modelfile from this repo
2. Create the model:

```bash
ollama create harshell -f Modelfile
```

3. Run it:

```bash
ollama run harshell "list all pdf files in my downloads folder"
```

### Example Usage

| Input | Output |
|---|---|
| list all pdf files in downloads | `find ~/Downloads -name "*.pdf"` |
| show disk usage of current folder | `du -sh .` |
| kill the process on port 3000 | `lsof -ti:3000 \| xargs kill` |
| compress this folder into a zip | `zip -r archive.zip .` |
| show my ip address | `ifconfig \| grep "inet " \| grep -v 127.0.0.1` |

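The first row of the table can be sanity-checked without the model. The sketch below runs the generated command against a temporary directory (standing in for `~/Downloads`) to show what it actually does:

```shell
# Recreate the "list all pdf files" example in a scratch directory.
dir=$(mktemp -d)
touch "$dir/report.pdf" "$dir/slides.pdf" "$dir/notes.txt"
# Same shape as the generated command, pointed at the scratch directory:
find "$dir" -name "*.pdf"   # lists only the two .pdf paths
```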
## System Prompt

The model uses this system prompt:

> You are a macOS terminal assistant. Convert natural language into safe shell commands. Return only the command, nothing else.

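Combined with the ChatML chat template, a request is rendered to the model roughly as shown below; the exact whitespace depends on the template in the shipped `Modelfile`:

```
<|im_start|>system
You are a macOS terminal assistant. Convert natural language into safe shell commands. Return only the command, nothing else.<|im_end|>
<|im_start|>user
list all pdf files in downloads<|im_end|>
<|im_start|>assistant
```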
## Ollama Modelfile

The included `Modelfile` configures:

- **Temperature**: 0.3 (low, for near-deterministic command output)
- **Top-p**: 0.9
- **Max tokens**: 128
- **Chat template**: ChatML format (`<|im_start|>` / `<|im_end|>`)

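A `Modelfile` matching those settings would look roughly like the sketch below; the file shipped in this repo is authoritative. Note that `num_predict` is Ollama's name for the max-tokens parameter:

```
FROM ./harsh-shell-q8_0.gguf

PARAMETER temperature 0.3
PARAMETER top_p 0.9
PARAMETER num_predict 128

SYSTEM You are a macOS terminal assistant. Convert natural language into safe shell commands. Return only the command, nothing else.

TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
```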
## Training Details

- **Method**: LoRA (Low-Rank Adaptation)
- **LoRA Rank**: 8
- **Training iterations**: 1000
- **Base model**: Qwen/Qwen2.5-1.5B
- **Dataset**: Curated natural language → macOS shell command pairs
- **Quantization**: Converted to GGUF Q8_0 using llama.cpp

## Files

- `harsh-shell-q8_0.gguf` — The quantized model (Q8_0, ~1.5 GB)
- `Modelfile` — Ollama configuration file

## Limitations

- Optimized for **macOS** commands; Linux/Windows commands may be less accurate
- Best for single-line commands; complex multi-line scripts may not generate correctly
- Always review generated commands before running them, especially destructive operations (`rm`, `mv`, etc.)
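Following the last point, one way to enforce review is to screen model output before executing it. A minimal sketch of such a guard, where the `is_risky` helper and its denylist are hypothetical and the hardcoded `cmd` stands in for output from `ollama run harshell`:

```shell
# Hypothetical guard: flag a generated command for manual review if it
# matches crude destructive patterns, otherwise run it.
is_risky() {
  case "$1" in
    *"rm "*|*"mkfs"*|*"dd "*|*"> /dev/"*) return 0 ;;  # crude denylist
    *) return 1 ;;
  esac
}

cmd='rm -rf ~/Downloads'            # stand-in for model output
if is_risky "$cmd"; then
  printf 'REVIEW REQUIRED: %s\n' "$cmd"
else
  eval "$cmd"
fi
```

A real wrapper would likely also print the command and wait for interactive confirmation (e.g. via `read`) rather than relying on a pattern denylist alone.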