Upload README.md with huggingface_hub
README.md
ADDED
---
license: apache-2.0
language:
- en
library_name: gguf
tags:
- gguf
- shell
- macos
- terminal
- command-line
- qwen2
- lora
- ollama
base_model: Qwen/Qwen2.5-1.5B
model_name: harshell
pipeline_tag: text-generation
quantized_by: josharsh
---

# Harshell - Natural Language to macOS Shell Commands

**Harshell** is a fine-tuned Qwen 2.5 1.5B model that converts natural language into macOS shell commands. It returns only the command — no explanations, no markdown, just the shell command you need.

## Model Details

| Property | Value |
|---|---|
| Base Model | Qwen 2.5 1.5B |
| Fine-tuning | LoRA (rank 8, 1000 iterations) |
| Quantization | Q8_0 GGUF |
| File Size | ~1.5 GB |
| License | Apache 2.0 |

## Quick Start with Ollama

1. Download the GGUF and Modelfile from this repo (a `huggingface-cli` sketch follows these steps)
2. Create the model:
   ```bash
   ollama create harshell -f Modelfile
   ```
3. Run it:
   ```bash
   ollama run harshell "list all pdf files in my downloads folder"
   ```

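If you prefer to script step 1, the two files can be fetched with `huggingface-cli`. This is a minimal sketch; `<user>/harshell` is a placeholder for this repo's actual Hub ID.

```bash
# Sketch only: replace <user>/harshell with this repo's ID on the Hub.
huggingface-cli download <user>/harshell harsh-shell-q8_0.gguf Modelfile --local-dir .
```
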
### Example Usage

| Input | Output |
|---|---|
| list all pdf files in downloads | `find ~/Downloads -name "*.pdf"` |
| show disk usage of current folder | `du -sh .` |
| kill the process on port 3000 | `lsof -ti:3000 \| xargs kill` |
| compress this folder into a zip | `zip -r archive.zip .` |
| show my ip address | `ifconfig \| grep "inet " \| grep -v 127.0.0.1` |

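Beyond interactive use, the same model can be called from scripts through Ollama's HTTP API. A minimal sketch, assuming Ollama is serving on its default port (11434) and `jq` is installed for parsing the JSON response:

```bash
# Ask the local Ollama server for a command and print only the model's reply.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "harshell", "prompt": "show disk usage of current folder", "stream": false}' \
  | jq -r '.response'
```
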
## System Prompt

The model uses this system prompt:

> You are a macOS terminal assistant. Convert natural language into safe shell commands. Return only the command, nothing else.

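When a request goes through Ollama, this system prompt and the user's phrase are wrapped in the ChatML chat template described in the next section. Rendered out, a request looks roughly like this (the user phrase here is just an example); the model's reply follows the final `<|im_start|>assistant` tag:

```
<|im_start|>system
You are a macOS terminal assistant. Convert natural language into safe shell commands. Return only the command, nothing else.<|im_end|>
<|im_start|>user
list all pdf files in downloads<|im_end|>
<|im_start|>assistant
```
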
## Ollama Modelfile

The included `Modelfile` configures the following (a sketch of an equivalent file appears after this list):
- **Temperature**: 0.3 (kept low for more deterministic command output)
- **Top-p**: 0.9
- **Max tokens**: 128
- **Chat template**: ChatML format (`<|im_start|>` / `<|im_end|>`)

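For reference, a Modelfile matching these settings would look roughly like the sketch below. The `Modelfile` shipped in this repo is the authoritative version; the GGUF path here assumes the file sits alongside it, and "Max tokens" maps to Ollama's `num_predict` parameter.

```
# Sketch of a Harshell Modelfile; the repo's own Modelfile is authoritative.
FROM ./harsh-shell-q8_0.gguf

PARAMETER temperature 0.3
PARAMETER top_p 0.9
PARAMETER num_predict 128

TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""

SYSTEM """You are a macOS terminal assistant. Convert natural language into safe shell commands. Return only the command, nothing else."""
```
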
## Training Details

- **Method**: LoRA (Low-Rank Adaptation)
- **LoRA Rank**: 8
- **Training iterations**: 1000
- **Base model**: Qwen/Qwen2.5-1.5B
- **Dataset**: Curated natural language → macOS shell command pairs
- **Quantization**: Converted to GGUF Q8_0 using llama.cpp (a rough sketch of this step follows the list)

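As a rough sketch of that last step, assuming the merged (base + LoRA) model lives in a local directory; paths are placeholders and script/binary names vary slightly between llama.cpp versions:

```bash
# Convert the merged model to an f16 GGUF, then quantize it to Q8_0.
python convert_hf_to_gguf.py ./harshell-merged --outfile harsh-shell-f16.gguf --outtype f16
./llama-quantize harsh-shell-f16.gguf harsh-shell-q8_0.gguf Q8_0   # binary is named `quantize` in older builds
```
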
## Files

- `harsh-shell-q8_0.gguf` — The quantized model (Q8_0, ~1.5 GB)
- `Modelfile` — Ollama configuration file

## Limitations

- Optimized for **macOS** commands; Linux/Windows commands may be less accurate
- Best for single-line commands; complex multi-line scripts may not generate correctly
- Always review generated commands before running them, especially destructive operations (`rm`, `mv`, etc.); a small confirmation wrapper is sketched below
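One way to keep that review step in the loop is a small wrapper that prints the suggested command and only runs it after you confirm. This is just a sketch; the `hsh` name and the use of `eval` are illustrative choices, not part of this repo.

```bash
#!/usr/bin/env bash
# hsh: ask Harshell for a command, show it, and run it only after confirmation.
# Usage: hsh <natural language request>
set -euo pipefail

cmd=$(ollama run harshell "$*")
printf 'Suggested command:\n  %s\n' "$cmd"
read -r -p "Run it? [y/N] " answer
if [[ $answer == [yY] ]]; then
  eval "$cmd"   # still your responsibility: review the command before confirming
fi
```
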