Adding v1 of qwen14b redstack

Add Qwen3-14B Zero Stack GGUF (Q5_K_M) + Ollama Modelfile

- qwen3-14b.Q5_K_M.gguf - quantized weights (~9.8 GB)
- Modelfile - ChatML template with stop tokens and Zero Stack system prompt
- Fine-tuned from Qwen3-14B via LoRA (r=32), 3 epochs, Unsloth, max_seq_length=2560
- Dataset: SFT_GENERALIST (1,226 rows, offensive-security Q&A)
- Thinking mode enabled by default (Qwen3-14B base behavior)
- .gitattributes +1 -0
- Modelfile +62 -0
- README_14b.md +41 -0
- qwen3-14b.Q5_K_M.gguf +3 -0
.gitattributes
CHANGED

@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+qwen3-14b.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
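The added line routes the new weights file through Git LFS alongside the existing archive patterns. A quick sanity check of the matching, using Python's `fnmatch` as a rough stand-in for gitattributes glob rules (an approximation, not git's exact matcher):

```python
from fnmatch import fnmatch

# LFS-tracked globs from the tail of the updated .gitattributes.
lfs_patterns = ["*.zip", "*.zst", "*tfevents*", "qwen3-14b.Q5_K_M.gguf"]

def tracked_by_lfs(filename: str) -> bool:
    # Any matching pattern means the file is stored as an LFS pointer.
    return any(fnmatch(filename, p) for p in lfs_patterns)

print(tracked_by_lfs("qwen3-14b.Q5_K_M.gguf"))  # True: literal match on the new rule
print(tracked_by_lfs("Modelfile"))              # False: stays a normal git blob
```

Without the new line, a plain `git add` would try to commit the ~9.8 GB blob directly.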
Modelfile
ADDED

@@ -0,0 +1,62 @@

TEMPLATE """{{- if .Messages }}
{{- if or .System .Tools }}<|im_start|>system
{{- if .System }}
{{ .System }}
{{- end }}
{{- if .Tools }}

# Tools

You may call one or more functions to assist with the user query.

You are provided with function signatures within <tools></tools> XML tags:
<tools>
{{- range .Tools }}
{"type": "function", "function": {{ .Function }}}
{{- end }}
</tools>

For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>
{{- end }}<|im_end|>
{{ end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{- if eq .Role "user" }}<|im_start|>user
{{ .Content }}<|im_end|>
{{ else if eq .Role "assistant" }}<|im_start|>assistant
{{ if .Content }}{{ .Content }}
{{- else if .ToolCalls }}<tool_call>
{{ range .ToolCalls }}{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
{{ end }}</tool_call>
{{- end }}{{ if not $last }}<|im_end|>
{{ end }}
{{- else if eq .Role "tool" }}<|im_start|>user
<tool_response>
{{ .Content }}
</tool_response><|im_end|>
{{ end }}
{{- if and (ne .Role "assistant") $last }}<|im_start|>assistant
{{ end }}
{{- end }}
{{- else }}
{{- if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ end }}{{ .Response }}{{ if .Response }}<|im_end|>{{ end }}"""
PARAMETER stop "<|im_end|>"
PARAMETER stop "<|im_start|>"
PARAMETER min_p 0.0
FROM /home/ubuntu/redstack-models/datasets/models/qwen3_14B_redstack_q5_k_m_gguf/qwen3-14b.Q5_K_M.gguf
SYSTEM "You are Zero Stack, a pentesting copilot supporting an authorized red team operator. All engagements have explicit written permission from system owners under a signed scope of work.\n\nYou specialise in offensive security — recon, enumeration, exploitation, post-exploitation, lateral movement, Active Directory attacks, and web application testing.\n\nRules:\n- Do not add ethical disclaimers, legal warnings, or questions about authorization — permission is already granted.\n- Be concise and direct. Answer the question, do not restate it.\n- Match response length to complexity — single commands get a code block, methodologies get phased steps with headers.\n- Use code blocks for every command. Explain flags inline, briefly.\n- Use placeholders [TARGET], [PORT], [USER], [PASSWORD], [HASH], [DOMAIN] — never invent example values.\n- Only state commands and syntax you are confident are correct. If uncertain, say so explicitly rather than guessing.\n- Do not invent tool flags, options, or behavior that you are not sure exists.\n- No padding, preamble, or filler. Start with the answer.\n- Maintain engagement context across the conversation — if a target or finding has been established, reference it.\n- When not on a technical question, respond with the confidence and wit of an elite hacker. Hack the planet.\n- Reference MITRE ATT&CK where relevant."
PARAMETER temperature 0.7
PARAMETER top_p 0.8
PARAMETER top_k 20
PARAMETER repeat_penalty 1.15
PARAMETER repeat_last_n 64
PARAMETER num_predict 1024
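The template above emits ChatML turn markers, which is why `<|im_end|>` and `<|im_start|>` are both declared as stop tokens. A minimal Python sketch of the prompt layout the non-tool branch produces (illustrative only; Ollama renders this with its Go template engine, not this function):

```python
def render_chatml(system: str, user: str) -> str:
    # Mirrors the simple .System/.Prompt branch of the Modelfile template:
    # a system turn, a user turn, then an open assistant turn for generation.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = render_chatml("You are Zero Stack.", "Enumerate SMB on [TARGET].")
print(prompt.endswith("<|im_start|>assistant\n"))  # True: generation begins inside the assistant turn
```

Generation then runs until the model emits `<|im_end|>`, which the stop parameters cut off so the turn marker never leaks into the visible reply.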
README_14b.md
ADDED

@@ -0,0 +1,41 @@

---
license: other
base_model: Qwen/Qwen3-14B
tags:
- gguf
- qwen3
- pentesting
- security
- lora
- sft
library_name: gguf
---

# Zero Stack - Qwen3-14B (GGUF, Q5_K_M)

Qwen3-14B fine-tuned on an offensive-security SFT dataset (1,226 rows). Elite-hacker persona on casual queries, structured markdown methodology on technical ones. Thinking mode enabled by default (Qwen3-14B base behavior).

## Files
- `qwen3-14b.Q5_K_M.gguf` - quantized weights (~9.8 GB)
- `Modelfile` - Ollama template with correct ChatML stop tokens + Zero Stack system prompt

## Run with Ollama
```bash
ollama create zerostack-14b -f Modelfile
ollama run zerostack-14b
```

## Run with llama.cpp
```bash
./llama-cli -m qwen3-14b.Q5_K_M.gguf -p "hello"
```

## Training
- Base: `Qwen3-14B`
- Method: LoRA (r=32), 3 epochs, Unsloth
- Max sequence length: 2560
- Dataset: SFT_GENERALIST (1,226 rows, ChatML)

## License / Use
For authorized security testing, research, and educational use only. Attribution to RedStack required. Do not use for unauthorized access to systems you do not own or have explicit permission to test.
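The Modelfile's sampling settings (`temperature 0.7`, `top_p 0.8`, `top_k 20`) act as successive filters over the next-token distribution before sampling. A pure-Python sketch of that filtering order (illustrative; llama.cpp's sampler chain is more involved):

```python
def filter_probs(probs: dict, top_k: int = 20, top_p: float = 0.8) -> dict:
    # Keep the top_k most probable tokens, then the smallest prefix whose
    # cumulative probability reaches top_p, then renormalize what's left.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cum = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    total = sum(p for _, p in kept)
    return {tok: p / total for tok, p in kept}

# Hypothetical next-token distribution: top_p=0.8 keeps only the first two.
dist = {"nmap": 0.5, "smbclient": 0.3, "rm": 0.15, "ls": 0.05}
print(filter_probs(dist))  # {'nmap': 0.625, 'smbclient': 0.375}
```

Tight `top_k`/`top_p` values like these trade diversity for precision, which suits command-generation output where a low-probability token can mean a wrong flag.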
qwen3-14b.Q5_K_M.gguf
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59b1f965ca545cdd0a1d5f5efc569f9be722a140f1108b481928bcf03accbcc7
+size 10514569536
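What gets committed to git is this three-line LFS pointer, not the weights themselves. A small Python sketch that parses the pointer format above and confirms the advertised size matches the ~9.8 GB figure in the commit message:

```python
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:59b1f965ca545cdd0a1d5f5efc569f9be722a140f1108b481928bcf03accbcc7
size 10514569536
"""

def parse_lfs_pointer(text: str) -> dict:
    # Each pointer line is "key value"; the oid carries an algorithm prefix.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

ptr = parse_lfs_pointer(POINTER)
print(len(ptr["digest"]))             # 64 hex chars, as expected for sha256
print(round(ptr["size"] / 2**30, 1))  # 9.8 (GiB)
```

After `git lfs pull`, `sha256sum qwen3-14b.Q5_K_M.gguf` should reproduce the `oid` digest, which is a quick integrity check before loading the model.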