---
license: cc-by-nc-nd-4.0
pipeline_tag: text-generation
library_name: transformers
tags:
- python
- coder
- developer-tools
- programming
- llm
---

# FastBit-450M-DeepCoder

FastBit-450M-DeepCoder is a lightweight LLM designed for Python code generation and logic processing. The model is optimized for high-speed inference on low-resource hardware such as the **Intel i3-4150**.

## Terms of Use (License)

This model is licensed under the **Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)** license.

* **Attribution**: You must give credit to the project.
* **NonCommercial**: You may not use this model for commercial purposes.
* **NoDerivatives**: **You may not modify, remix, or fine-tune these weights.**

## Implementation

To run this model locally, use the following Python script. Note: this model uses a custom weight file named `nanorons.safetensors`.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "imsuprtwo2/FastBit-450M-DeepCoder"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # full precision for CPU inference
    low_cpu_mem_usage=True,
    trust_remote_code=True,
)

prompt = "def calculate_factorial(n):"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

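The example prompt asks the model to complete a factorial function. One way to sanity-check the generated completion is to compare its behavior against a known-correct reference; the sketch below is such a reference implementation matching the prompt's signature (it is not part of the model, just a local correctness check):

```python
def calculate_factorial(n):
    # Reference implementation for the prompt "def calculate_factorial(n):".
    # Iterative factorial, defined for non-negative integers only.
    if n < 0:
        raise ValueError("factorial is undefined for negative numbers")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```

If the model's completion disagrees with this reference on simple inputs (e.g. `calculate_factorial(5)` should be `120`), the generation settings or prompt likely need adjustment.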
## Project Details

* **Model Name**: FastBit-450M
* **Parameters**: 450 million
* **Optimization**: DeepCoder architecture for Python-specific tasks.
* **Status**: Active development by MASA.