---
license: cc-by-nc-nd-4.0
pipeline_tag: text-generation
library_name: transformers
tags:
- python
- coder
- developer-tools
- programming
- llm
---
# FastBit-450M-DeepCoder
FastBit-450M-DeepCoder is a lightweight LLM designed for Python code generation and logic processing. The model is optimized for high-speed inference on low-resource hardware such as the **Intel i3-4150**.
## ⚖️ Terms of Use (License)
This model is licensed under the **Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International (CC-BY-NC-ND 4.0)**.
* **Attribution**: You must give credit to the project.
* **Non-Commercial**: You may not use this model for commercial purposes.
* **No Derivatives**: **You are strictly prohibited from modifying, remixing, or fine-tuning these weights.**
## 🚀 Implementation
To run this model locally, use the following Python script. Note: This model uses a custom weight file named `nanorons.safetensors`.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "imsuprtwo2/FastBit-450M-DeepCoder"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # full precision for CPU inference
    low_cpu_mem_usage=True,
    trust_remote_code=True,     # required to load the repo's custom model code
)

prompt = "def calculate_factorial(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
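The script above uses greedy decoding with default thread settings. On low-core CPUs like the i3-4150 (2 physical cores), capping PyTorch's thread count can avoid oversubscription, and sampling parameters can be collected in a reusable dict. A minimal sketch; the thread count and sampling values below are illustrative assumptions, not tuned settings from the model authors:

```python
import torch

# The i3-4150 has 2 physical cores (4 hardware threads); capping PyTorch at
# the physical-core count often avoids oversubscription during CPU inference.
torch.set_num_threads(2)

# Illustrative sampling settings for code generation (assumed values, not
# recommendations for this model). Pass to model.generate(**inputs, **gen_kwargs).
gen_kwargs = dict(
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
```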
## 🛠 Project Details
* **Model Name**: FastBit-450M
* **Parameters**: 450 Million
* **Optimization**: DeepCoder architecture for Python-specific tasks.
* **Status**: Active development by MASA.