---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- deepbrainz
- reasoning
- mathematics
- code
- enterprise
- 0.6b
- long-context
library_name: transformers
---

# DeepBrainz-R1-0.6B

**DeepBrainz-R1-0.6B** is a compact, high-performance reasoning model engineered by **DeepBrainz AI & Labs**. It is part of the **DeepBrainz-R1 Series**, designed to deliver frontier-class reasoning capabilities at cost-effective parameter counts.

This variant features a **32,768-token context window**, optimized for processing medium-to-long documents and codebases.

---

## 🚀 Model Highlights

- **Parameter Count:** ~0.6B
- **Context Window:** 32,768 tokens
- **Specialization:** STEM Reasoning, Logic, Code Analysis
- **Architecture:** Optimized Dense Transformer
- **Deployment:** Ready for vLLM, TGI, and local inference (see the sketch below)

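For serving with vLLM, a minimal offline-inference sketch is shown below. The sampling values and the example prompt are illustrative assumptions, not tuned recommendations; `max_model_len` matches the 32,768-token window noted above.

```python
# Minimal vLLM sketch (assumes `pip install vllm`); sampling values are
# illustrative, not tuned recommendations for this model.
from vllm import LLM, SamplingParams

llm = LLM(model="DeepBrainz/DeepBrainz-R1-0.6B", max_model_len=32768)
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=256)

# Generate a completion for a single reasoning prompt.
outputs = llm.generate(["Prove that the sum of two even integers is even."], params)
print(outputs[0].outputs[0].text)
```

For TGI, pointing the launcher's `--model-id` at the same repository should work analogously.
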
---

## 🎯 Intended Use Cases

- **Agentic Workflows:** Reliable execution of multi-step planning tasks.
- **Math & Science:** Solving complex word problems and equations.
- **Code Generation:** Writing and debugging algorithms.
- **Structured Data Extraction:** Parsing and reasoning over unstructured text (see the example under Usage).

> **Note:** This is a post-trained reasoning variant intended for evaluation and experimentation.
> It is not production-validated and is not optimized for open-ended conversational chat.

---

## 💻 Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DeepBrainz/DeepBrainz-R1-0.6B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # load weights in bf16 where supported
    device_map="auto",           # automatic device placement (requires accelerate)
)

prompt = "Analyze the time complexity of the following algorithm:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

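Since structured data extraction is one of the intended use cases above, the sketch below reuses `model` and `tokenizer` from the snippet above; the invoice text and JSON field names are hypothetical illustrations, not an official prompt format. If the tokenizer ships a chat template, wrapping the request with `tokenizer.apply_chat_template` may improve results.

```python
# Hypothetical structured-extraction prompt; the invoice text and field
# names are illustrative only.
prompt = (
    "Extract the following fields as JSON with keys vendor, date, amount:\n"
    "Invoice from Acme Corp dated 2024-03-15 for $1,250.00.\n"
    "JSON:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```
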
---

## 🏗️ Technical Summary

This model has undergone post-training to enhance reasoning behavior and robustness under agentic workloads.

Detailed post-training recipes and dataset compositions are not fully disclosed.

---

## 📄 License

This model is released under the **Apache 2.0** license, permitting both academic and commercial use.

---

<div align="center">
  <b>DeepBrainz AI & Labs</b><br>
  <i>Advancing General Intelligence through Scalable Reasoning</i>
</div>