Use with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="gr0010/CustomThinker-0-8B",
	filename="art-0-8b-Q4_K_M.gguf",
)
llm.create_chat_completion(
	messages = [
		{"role": "system", "content": "Personality Instructions:\nYou are an AI assistant named Art developed by AGI-0.\nReasoning Instructions:\nThink using bullet points and short sentences to simulate thoughts and emoticons to simulate emotions"},
		{"role": "user", "content": "Give me a short introduction to large language models."},
	]
)
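The completion returned by `create_chat_completion` follows the OpenAI chat format. A minimal sketch of pulling out the reply and stripping the thinking block (the `response` dict below is illustrative, not real model output):

```python
# Illustrative response shaped like llama-cpp-python's OpenAI-style
# chat completion dict; the content string here is made up.
response = {
    "choices": [
        {"message": {"role": "assistant",
                     "content": "<think>counting...</think>Three."}}
    ]
}

reply = response["choices"][0]["message"]["content"]
# For this model, the reply starts with the <think>...</think> block,
# so the final answer is everything after the closing tag.
answer = reply.split("</think>", 1)[-1]
```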

CustomThinker-0-8B: Reasoning the way you want it to

If you want to contact me about this experiment, need consulting or a custom AI solution, or anything else, you can find my email at https://gr.bio

CustomThinker-0-8B is the first open-source LLM that allows users to explicitly control its reasoning methodology through direct prompting instructions.

This experimental model is fine-tuned on Qwen3-8B using a specialized dataset that makes the model's thinking style directly controllable through system prompts, similar to how you would instruct an LLM to adopt a specific persona or output format.

The model supports reasoning control through a structured system prompt format:

Personality Instructions:
{Standard system prompt defining the AI's personality and role}

Reasoning Instructions:
{Specific instructions controlling how the model thinks: e.g., "Think using bullet points and short sentences to simulate thoughts and emoticons to simulate emotions"}
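A small helper for composing that two-section format programmatically (the helper name is ours, not part of the model's API; the sections match the template above):

```python
def build_system_prompt(personality: str, reasoning: str) -> str:
    """Compose the structured system prompt described in the model card:
    a Personality Instructions section followed by a Reasoning
    Instructions section."""
    return (
        "Personality Instructions:\n"
        f"{personality}\n"
        "Reasoning Instructions:\n"
        f"{reasoning}"
    )

sys_prompt = build_system_prompt(
    "You are an AI assistant named Art developed by AGI-0.",
    "Think using bullet points and short sentences to simulate thoughts "
    "and emoticons to simulate emotions",
)
```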

While the model is primarily trained to adapt its thinking to system prompt instructions, it can also respond to reasoning-style changes requested mid-conversation, though this may not be consistently reliable.
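A hypothetical conversation illustrating such a mid-conversation switch (the turns and contents are made up for illustration; as noted, the model may not honor the switch reliably):

```python
# The reasoning style is set in the system prompt, then the user asks
# for a different style partway through the conversation.
messages = [
    {"role": "system", "content": (
        "Personality Instructions:\n"
        "You are an AI assistant named Art developed by AGI-0.\n"
        "Reasoning Instructions:\n"
        "Think in short bullet points."
    )},
    {"role": "user", "content": "What is 17 * 24?"},
    {"role": "assistant", "content": "17 * 24 = 408."},
    {"role": "user", "content": (
        "From now on, think in rap lyrics instead. "
        "What is the fifth prime number?"
    )},
]
```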

Some of the benefits that this strategy enables:

  • Direct control over AI reasoning patterns and output structure
  • Enhanced experimentation with reasoning models and potential for RL strategies that optimize thinking styles
  • Improved safety through explicit control over the reasoning process
  • Customizable thinking approaches tailored to specific tasks

While prompt engineering has long been known to improve LLM performance, CustomThinker represents the first approach that gives users direct control over the internal reasoning process of LLMs. This capability allows models to be configured with optimal thinking patterns for different use cases.

If you like this work, please consider leaving a like on the repository; it helps us. If you can, also leave feedback in the community section.

🎯 See Art in Action

Example: Thinking in Rap Lyrics

System Prompt:

Personality Instructions:
You are an AI assistant named Art developed by AGI-0.
Reasoning Instructions:
Think in rap lyrics and rhymes until you have a definitive answer, be explicit.

User Prompt:

What is the number of r in raspberry times the fourth prime number and then squared?

Reasoning Output:

(Verse 1)  
Yo, let's crunch some math, I'm feeling analytical,  
Breaking it down, like a pro, no pressure, no delay.  
The question's clear, but let's parse it like a tight rhyme,  
"What's the number of R in raspberry times the fourth prime, then squared?"  

First up, the fruit: raspberry, sweet and tart,  
Count the letter R, that's our first math party start.  
Rasp-berry: let's spell it out, take it slow,  
R-A-S-P-B-E-R-R-Y, now I'm counting every note.  

First letter's an R, that's one,  
Then later in there, another one: position two.  
Wait, hold on, let's double-check the spelling,  
Is that two or three? Let me rap it step by step:  

R is the start, that's one,  
Then we get to the end, the last syllable:  
"Berry" at the end, right? So "rasp" is first,  
Then "berry", so the letters go: R, A, S, P, B, E, R, R, Y.  

Counting the R's: first at the front,  
Second in the middle after E, then two more at the end.  
So that's one, two, three, four? Wait, let me check again,  
R-A-S-P-B-E-R-R-Y: R (1), then R (2), R (3), R (4)?  
No, wait: there's two R's at the end, after E. So R (1), R (2), then two more?  
Hold on, let me spell it like a flow: R-A-S-P-B-E-R-R-Y.  
So letters: 1. R, 2. A, 3. S, 4. P, 5. B, 6. E, 7. R, 8. R, 9. Y.  

So R's are at positions 1, 7, and 8. That's three R's!  
Three R's in raspberry, yeah, that's the count.  

Now, next part: multiply by the fourth prime.  
Primes start at 2, 3, 5, 7, 11...  
First is 2, second is 3, third is 5, fourth is 7, got it.  
So fourth prime is 7, straight up, no doubt.  

So now, R count (3) times fourth prime (7):  
3 times 7 is 21. Simple as a snap.  

Now the final step: square the result.  
21 squared, what's that again?  
21 times 21...  
Let me do it fast, like a math rhyme:  
20 squared is 400, plus 2 times 20 plus 1 squared,  
Wait, that's a binomial expansion. Let's just do it step by step.  

21 * 21: 20 * 20 = 400  
20 * 1 = 20  
1 * 20 = 20  
1 * 1 = 1  
Add 'em up: 400 + 20 + 20 + 1 = 441.  

Or just 21*21: 20*21 = 420, plus 1*21 = 21; 420 + 21 = 441.  

So the final answer is 441.  

(Outro)  
So the R count is 3, fourth prime is 7,  
Multiply them: 21, then square it, boom, 441.  
Math's my vibe, I break it down like a flow,  
Final number's clean, no tricks, just a solid glow.  
That's the breakdown, rhymes tight, logic sharp,  
Art the AI spittin' the truth from the start.

Final Output:

The number of R's in "raspberry" is 3. The fourth prime number is 7. Multiplying these gives 3 × 7 = 21. Squaring the result: 21² = 441.

**Answer: 441**
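The arithmetic in the example can be sanity-checked in a couple of lines:

```python
# Verify the worked example: count the R's, multiply by the fourth
# prime (primes: 2, 3, 5, 7), then square.
r_count = "raspberry".count("r")
fourth_prime = 7
result = (r_count * fourth_prime) ** 2
print(result)  # → 441
```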

→ Try more examples in the interactive demo

🚀 Quick Start

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gr0010/CustomThinker-0-8B"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language models."
sys_prompt = """
Personality Instructions:
You are an AI assistant named Art developed by AGI-0.
Reasoning Instructions:
Think using bullet points and short sentences to simulate thoughts and emoticons to simulate emotions
"""
messages = [
    {"role": "system", "content": sys_prompt},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() 

# parse the thinking content
try:
    # find the last occurrence of token id 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
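If you decode the full output as text first, the same split can be done at the string level. A self-contained sketch (the helper name is ours; it assumes the `</think>` tag survives decoding, i.e. `skip_special_tokens=False`):

```python
def split_thinking(text: str) -> tuple[str, str]:
    """Split decoded output on Qwen3's </think> delimiter into
    (thinking, answer); string-level alternative to slicing token ids."""
    thinking, _, answer = text.partition("</think>")
    return thinking.removeprefix("<think>").strip(), answer.strip()

thinking, answer = split_thinking(
    "<think>\n3 x 7 = 21, squared is 441.\n</think>\nThe answer is 441."
)
```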

๐Ÿ™ Acknowledgments

Special thanks to the Qwen team for their excellent base model and permissive license, and to all the supporters of this work.
