Update README.md
README.md CHANGED
@@ -24,7 +24,7 @@ This model is part of the **DeepBrainz-R1 Series**, built to deliver frontier-cl
 ## 🚀 Model Highlights
 
 - **Parameter Count:** ~2B
-- **Context Window:** 40,960 tokens
+- **Context Window:** up to 40,960 tokens (extended context; experimental)
 - **Specialization:** STEM Reasoning, Logic, Code Analysis
 - **Architecture:** Optimized Dense Transformer (Qwen2.5/3 Compatible)
 - **Deployment:** Ready for vLLM, TGI, and local inference
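The context-window figure above implies a practical budgeting step at inference time: prompt tokens plus generated tokens must fit inside the window. A minimal sketch, assuming a 40,960-token limit as stated in the highlights; the helper name and the constant are illustrative, and a real deployment would count tokens with the model's own tokenizer rather than arithmetic alone.

```python
# Sketch: budgeting prompt vs. generation tokens against the model's
# 40,960-token context window (figure taken from the README highlights).
# `max_new_tokens` is an illustrative helper, not part of any library API.

CONTEXT_WINDOW = 40_960  # maximum context length per the model card

def max_new_tokens(prompt_tokens: int, context_window: int = CONTEXT_WINDOW) -> int:
    """Return how many tokens may still be generated for a prompt of
    `prompt_tokens` tokens without exceeding the context window."""
    return max(context_window - prompt_tokens, 0)
```

For example, a 40,000-token prompt leaves room for at most 960 generated tokens, and an over-long prompt leaves none, so it should be truncated before generation.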
@@ -38,7 +38,8 @@ This model is part of the **DeepBrainz-R1 Series**, built to deliver frontier-cl
 - **Code Generation:** Writing and debugging algorithms.
 - **Structured Data Extraction:** Parsing and reasoning over unstructured text.
 
-> **Note:** This is a
+> **Note:** This is a post-trained reasoning variant intended for evaluation and experimentation.
+> It is not production-validated and is not optimized for open-ended conversational chat.
 
 ---
 
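The structured-data-extraction use case listed above typically amounts to wrapping a passage in an instruction that requests JSON, then parsing the model's completion. A minimal sketch under that assumption; the prompt wording, the helper names, and the sample completion are all illustrative, not taken from the model card.

```python
# Sketch of the structured-extraction workflow: build a JSON-requesting
# prompt, then parse the first JSON object out of a model completion.
# Both helpers are hypothetical scaffolding around the model, not its API.
import json

def build_extraction_prompt(text: str, fields: list[str]) -> str:
    """Wrap `text` in an instruction asking for the given fields as JSON."""
    field_list = ", ".join(fields)
    return (
        "Extract the following fields from the text as a JSON object "
        f"({field_list}):\n\n{text}"
    )

def parse_completion(completion: str) -> dict:
    """Parse the first JSON object found in a model completion, tolerating
    surrounding prose such as 'Sure, here is the JSON: {...}'."""
    start = completion.index("{")
    end = completion.rindex("}") + 1
    return json.loads(completion[start:end])
```

Parsing defensively matters because even a reasoning-tuned model may wrap its JSON in conversational text.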
@@ -65,6 +66,14 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 
 ---
 
+## 🏗️ Technical Summary
+
+This model has undergone post-training to enhance reasoning behavior and robustness under agentic workloads.
+
+Detailed post-training recipes and dataset compositions are not fully disclosed.
+
+---
+
 ## 🛡️ Limitations & Safety
 
 While this model demonstrates strong reasoning capabilities, it may still produce inaccurate information ("hallucinations"). Users should implement appropriate guardrails for production deployments.
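The guardrail recommendation above can take many forms; one lightweight pattern is a post-generation check that flags claims the source context does not support. A minimal sketch of that idea, checking only numeric claims; the function name and the approach are illustrative examples, not a production safety system or anything prescribed by the model card.

```python
# Illustrative hallucination guardrail: flag numbers asserted in a model
# answer that never appear in the grounding context. A real guardrail
# would combine several such checks; this shows the pattern only.
import re

def flag_unsupported_numbers(answer: str, context: str) -> list[str]:
    """Return numbers present in `answer` but absent from `context`,
    as a sorted list of strings."""
    number = r"\d+(?:\.\d+)?"
    answer_nums = set(re.findall(number, answer))
    context_nums = set(re.findall(number, context))
    return sorted(answer_nums - context_nums)
```

An empty result does not prove the answer is correct; it only means this one check found nothing to flag, which is why production deployments layer multiple guardrails.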