---
license: mit
language:
- en
pipeline_tag: text-generation
library_name: cpp
tags:
- creative-writing
- ollama
- code-execution
- productivity
- offline-ai
- local-llm
---
# Pencilclaw v1.0 (Testing) ✏️
**PENCILCLAW** is a C++ command-line tool that turns your local [Ollama](https://ollama.com/) instance into a creative writing partner with the ability to execute generated C++ code. It follows a simple ADA-style command interface - perfect for writers, tinkerers, and AI enthusiasts who want to keep their data private and their workflows offline.
---
## Features
- **Story & Poem Generation** - Use `/STORY` or `/POEM` with a title/subject to get creative text from your local LLM.
- **Book Continuation** - The `/BOOK` command appends new chapters to a running `book.txt`, maintaining context from previous content.
- **Code Execution** - If the AI responds with a C++ code block (triple backticks), `/EXECUTE` compiles and runs it - ideal for prototyping or exploring AI-generated algorithms.
- **Session Logging** - All interactions are saved in `pencil_data/session.log` for later reference.
- **Workspace Isolation** - Everything lives in the `./pencil_data/` folder; temporary files are cleaned up after execution.
- **Security Awareness** - Includes filename sanitisation and a confirmation prompt before running any AI-generated code.
---
## Project Structure
All necessary files for PENCILCLAW are contained within the `/home/kali/pencilclaw/` directory. Below is the complete tree:
```
/home/kali/pencilclaw/
├── pencilclaw.cpp        # Main program source
├── pencil_utils.hpp      # Workspace and template helpers
├── pencilclaw            # Compiled executable (after build)
└── pencil_data/          # Created automatically on first run
    ├── session.log       # Full interaction log
    ├── book.txt          # Accumulated book chapters
    ├── temp_code.cpp     # Temporary source file (deleted after execution)
    ├── temp_code         # Temporary executable (deleted after execution)
    └── [story/poem files] # Individual .txt files for each /STORY or /POEM
```
**The `pencil_data` directory is created automatically when you run the program. All generated content and logs reside there.**
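The first-run setup amounts to creating the folder and touching the log file, which `std::filesystem` makes trivial. A sketch of that behaviour (the actual helpers live in `pencil_utils.hpp` and may be named and structured differently):

```cpp
#include <filesystem>
#include <fstream>

// Create ./pencil_data/ if it is missing and open the session log for appending.
// (Illustrative sketch; ensure_workspace is a hypothetical name, not the
// function used in pencil_utils.hpp.)
std::filesystem::path ensure_workspace() {
    const std::filesystem::path dir{"pencil_data"};
    std::filesystem::create_directories(dir);            // no-op when the folder exists
    std::ofstream{dir / "session.log", std::ios::app};   // touch the log file
    return dir;
}
```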
---
## Requirements
- **libcurl** development libraries
- **cJSON** library
- **Ollama** installed and running
- A model pulled in Ollama (default: `qwen2.5:0.5b` - change in source if desired)
---
## Installation
### 1. Install System Dependencies
```bash
sudo apt update
sudo apt install -y build-essential libcurl4-openssl-dev
```
### 2. Install cJSON
If your distribution does not provide a package, build from source:
```bash
git clone https://github.com/DaveGamble/cJSON.git
cd cJSON
mkdir build && cd build
cmake ..
make
sudo make install
sudo ldconfig
cd ../..
```
### 3. Install Ollama
```bash
curl -fsSL https://ollama.com/install.sh | sh
ollama serve & # start the service
ollama pull qwen2.5:0.5b # or another model of your choice
```
### Custom Models
Edit line 36 of the pencilclaw.cpp file:
```cpp
// Model name - change this to match your installed model (e.g., "llama3", "qwen2.5", "mistral")
const std::string MODEL_NAME = "qwen2.5:0.5b";
```
### 4. Compile PENCILCLAW
Place the source files in the same directory and compile:
```bash
g++ -std=c++17 -o pencilclaw pencilclaw.cpp -lcurl -lcjson
```
If cJSON headers are in a non-standard location (e.g., `/usr/local/include/cjson`), add the appropriate `-I` flag:
```bash
g++ -std=c++17 -o pencilclaw pencilclaw.cpp -lcurl -lcjson -I/usr/local/include/cjson
```
---
## Usage
Start the program:
```bash
./pencilclaw
```
You will see the `>` prompt. Commands are case-sensitive and start with `/`.
### Available Commands
| Command | Description |
|-------------------|-----------------------------------------------------------------------------|
| `/HELP` | Show this help message. |
| `/STORY <title>` | Generate a short story with the given title. Saved as `<title>.txt`. |
| `/POEM <subject>` | Compose a poem about the subject. Saved as `<subject>.txt`. |
| `/BOOK <chapter>` | Append a new chapter to `book.txt` (creates file if it doesn't exist). |
| `/EXECUTE` | Compile and run the first C++ code block from the last AI response. |
| `/DEBUG` | Toggle verbose debug output (shows JSON requests/responses). |
| `/EXIT` | Quit the program. |
Any line not starting with `/` is sent directly to Ollama as a free prompt; the response is displayed and logged.
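Whether it comes from a `/STORY` template or a free prompt, each request ultimately becomes a JSON body sent to Ollama's `/api/generate` endpoint. A hand-assembled sketch of that payload (`pencilclaw.cpp` builds it with cJSON instead, and a real implementation must JSON-escape the prompt):

```cpp
#include <string>

// Assemble the non-streaming request body for
// POST http://localhost:11434/api/generate.
// (Illustrative only: the prompt is assumed to contain no characters that
// need JSON escaping; pencilclaw.cpp constructs this safely with cJSON.)
std::string build_request(const std::string& model, const std::string& prompt) {
    return std::string{"{\"model\":\""} + model +
           "\",\"prompt\":\"" + prompt + "\",\"stream\":false}";
}
```

With `"stream": false`, Ollama returns the whole completion in a single JSON object, which is simpler to parse than the default streamed chunks.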
---
## Security Notes
- **Code execution is a powerful feature.** PENCILCLAW asks for confirmation before running any AI-generated code. Always review the code if you are unsure.
- **Filename sanitisation** prevents path traversal attacks (e.g., `../../etc/passwd` becomes `____etc_passwd`).
- All operations are confined to the `pencil_data` subdirectory; no system-wide changes are made.
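A minimal version of the sanitisation idea is to replace every character outside a small allow-list with `_` (the exact rule in `pencilclaw.cpp` may differ, for instance in how it counts or collapses runs of dots and slashes):

```cpp
#include <cctype>
#include <string>

// Map anything that is not alphanumeric, '-', or a space to '_', so a title
// can never escape the pencil_data/ folder. (Sketch only; the shipped rule
// may treat separators differently.)
std::string sanitise_filename(const std::string& name) {
    std::string out;
    out.reserve(name.size());
    for (unsigned char c : name)
        out += (std::isalnum(c) || c == '-' || c == ' ') ? static_cast<char>(c) : '_';
    return out;
}
```

Because no `/` or `.` survives, a hostile title like `../../etc/passwd` degrades into a harmless flat filename inside the workspace.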
---
## Customisation
- **Model**: Change the `MODEL_NAME` constant in `pencilclaw.cpp` to use a different Ollama model.
- **Prompts**: Edit the templates in `pencil_utils.hpp` (`get_template` function) to adjust the AI's behaviour.
- **Timeout**: The default HTTP timeout is 60 seconds. Adjust `CURLOPT_TIMEOUT` in the source if needed.
---
## Troubleshooting
| Problem | Solution |
|----------------------------------|----------------------------------------------------------------|
| `cJSON.h: No such file or directory` | Install cJSON or add the correct `-I` flag during compilation. |
| `curl failed: Timeout was reached` | Ensure Ollama is running (`ollama serve`) and the model is pulled. |
| Model not found | Run `ollama pull <model_name>` (e.g., `qwen2.5:0.5b`). |
| Compilation errors (C++17) | Use a compiler that supports `-std=c++17` (g++ 7+ or clang 5+).|
---
## License
This project is released under the MIT License. Built with C++ and Ollama.