---
license: other
library_name: transformers
base_model:
- LucidityAI/Astral-4B-Coder
- openfree/Darwin-Qwen3-4B
- Qwen/Qwen3-4B
tags:
- qwen3
- mergekit
- merge
- text-generation-inference
- code
- coder
- withinusai
language:
- en
datasets:
- LucidityAI/Astral-Post-Training-Dataset
pipeline_tag: text-generation
---
# Darwin-Astral-4B-Coder
**Darwin-Astral-4B-Coder** is a merged 4B-class coding model release from **WithIn Us AI**, designed for code generation, instruction-following, and practical developer-assistant workflows.
This repository is distributed as a standard **Transformers** checkpoint in **Safetensors** format and is positioned as a merge-based model that blends Darwin-style and Astral-style coding traits within a Qwen3-family 4B backbone.
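For quick local testing, a minimal loading sketch is shown below. The repository id `WithinUsAI/Darwin-Astral-4B-Coder` is an assumption for illustration; substitute the actual Hub path for this repo.
```python
# Minimal sketch for loading the checkpoint with Transformers.
# NOTE: the repo id below is hypothetical; replace it with the real Hub path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WithinUsAI/Darwin-Astral-4B-Coder"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the checkpoint's native precision
    device_map="auto",   # requires `accelerate` for automatic placement
)

prompt = "Write a Python function that reverses a linked list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```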
## Model Summary
This model is intended for:
- code generation
- code explanation
- debugging assistance
- implementation planning
- instruction-following
- developer assistant workflows
- local or hosted coding inference
As a 4B-class model, it aims to deliver stronger coding capability than very small models while keeping a lighter deployment footprint than larger coder checkpoints.
## Base Model Lineage
The current repository metadata lists the following upstream model references:
- `LucidityAI/Astral-4B-Coder`
- `openfree/Darwin-Qwen3-4B`
- `Qwen/Qwen3-4B`
The visible merge configuration in the README also shows:
- `Qwen/Qwen3-4B-Instruct-2507` as the base model in the YAML block
- `Lucidity-AI-Astral-4B-Coder` as a merge source
- `openfree-Darwin-Qwen3-4B` as a merge source
These names are preserved here as shown on the repository page.
## Merge Details
According to the current README:
- this model is a merge of pre-trained language models
- it was created using **mergekit**
- the **SLERP** merge method was used
The repository also includes a `mergekit_config.yml` file, which records the merge recipe used for this release.
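For intuition, the sketch below illustrates the core SLERP operation on a pair of weight tensors. This is a conceptual illustration, not mergekit's implementation; the `mergekit_config.yml` shipped in this repo remains the authoritative description of the actual merge.
```python
# Conceptual sketch of SLERP between two weight tensors (not mergekit's code).
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two tensors, treated as flat vectors."""
    v0, v1 = w0.flatten().float(), w1.flatten().float()
    # Angle between the two weight vectors on the unit hypersphere.
    dot = torch.clamp(torch.dot(v0 / (v0.norm() + eps), v1 / (v1.norm() + eps)), -1.0, 1.0)
    theta = torch.acos(dot)
    if theta < eps:
        # Nearly parallel weights: plain linear interpolation is the stable fallback.
        merged = (1 - t) * v0 + t * v1
    else:
        s0 = torch.sin((1 - t) * theta) / torch.sin(theta)
        s1 = torch.sin(t * theta) / torch.sin(theta)
        merged = s0 * v0 + s1 * v1
    return merged.reshape(w0.shape).to(w0.dtype)
```
In an actual merge, something like `slerp(t, a[name], b[name])` would be applied per matching state-dict entry, with the interpolation factor `t` typically varied by layer and module as the config specifies.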
## Dataset Lineage
The repository page currently shows the following dataset association:
- `LucidityAI/Astral-Post-Training-Dataset`
This indicates post-training lineage connected to the Astral model family used in the merge.
## Intended Use
Recommended use cases include:
- coding assistant experiments
- generating utility functions and scripts
- explaining code and technical concepts
- debugging support
- step-by-step implementation planning
- local developer tools
- hosted text-generation workflows for software tasks
## Suggested Use Cases
This model can be useful for:
- drafting Python, JavaScript, or general-purpose code
- proposing refactors
- generating boilerplate
- answering developer questions
- comparing implementation approaches
- producing structured technical responses
## Out-of-Scope Use
This model should not be relied on for:
- legal advice
- medical advice
- financial advice
- safety-critical automation
- autonomous production engineering without review
- security-critical code without expert validation
All generated code should be reviewed, tested, and validated before real-world deployment.
## Repository Contents
The repository currently includes standard Hugging Face model assets such as:
- `README.md`
- `.gitattributes`
- `added_tokens.json`
- `config.json`
- `mergekit_config.yml`
- `merges.txt`
- `model-00001-of-00002.safetensors`
- `model-00002-of-00002.safetensors`
- `model.safetensors.index.json`
- `special_tokens_map.json`
- `tokenizer.json`
- `tokenizer_config.json`
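To fetch these files locally without instantiating the model, a `huggingface_hub` sketch like the one below can be used; the repo id is again an assumption.
```python
# Download the raw repository files (repo id is hypothetical).
from huggingface_hub import snapshot_download

local_dir = snapshot_download("WithinUsAI/Darwin-Astral-4B-Coder")
print(local_dir)  # local path containing the safetensors shards and tokenizer files
```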
## Prompting Guidance
This model will usually work best with prompts that are:
- direct
- scoped to a clear task
- explicit about the language or framework
- clear about whether code, explanation, or both are wanted
- structured when step-by-step reasoning is useful
### Example prompt styles
**Code generation**
> Write a Python function that loads a JSON file, validates required keys, and returns cleaned records.
**Debugging**
> Explain why this code raises a KeyError and provide a safer corrected version.
**Implementation planning**
> Create a step-by-step plan for building a FastAPI service with authentication, logging, and tests.
**Refactoring**
> Refactor this function for readability and add basic error handling.
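Because Qwen3-family tokenizers ship a chat template (an assumption for this repo until verified against its `tokenizer_config.json`), prompts like those above are usually wrapped as chat messages. A self-contained sketch, with a hypothetical repo id:
```python
# Sketch: wrap a prompt as a chat message and generate a reply.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WithinUsAI/Darwin-Astral-4B-Coder"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "Refactor this function for readability and add basic error handling."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```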
## Strengths
This model may be especially useful for:
- blended coding workflows
- practical developer assistance
- moderate-size local inference
- structured software-task prompting
- merge-model experimentation
- compact coder deployments
## Limitations
Like other merged 4B-class language models, this model may:
- hallucinate APIs or implementation details
- generate incomplete or incorrect code
- produce insecure patterns
- make reasoning mistakes on harder prompts
- require prompt iteration for best results
- need human validation before real-world use
## Attribution
**WithIn Us AI** is the publisher of this merged model release.
Credit for upstream assets remains with their original creators. The repository metadata and README specifically reference:
- `LucidityAI/Astral-4B-Coder`
- `openfree/Darwin-Qwen3-4B`
- `Qwen/Qwen3-4B`
- `Qwen/Qwen3-4B-Instruct-2507`
and the dataset:
- `LucidityAI/Astral-Post-Training-Dataset`
## License
This draft uses:
- `license: other`
If you maintain this repo, replace this with the exact license terms you want displayed and make sure they align with any upstream obligations from the referenced source models and datasets.
## Acknowledgments
Thanks to:
- **WithIn Us AI**
- **LucidityAI**
- **openfree**
- **Qwen**
- the **mergekit** ecosystem
- the Hugging Face platform
- the broader open-source LLM community
## Disclaimer
This model may produce inaccurate, insecure, biased, incomplete, or misleading outputs. All important generations, especially code and technical guidance, should be reviewed and tested before real-world use. |