```python
# Load model directly
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "finalform/error_analysis_Qwen3-Coder-30B-A3B-Instruct"

# Use the causal-LM auto class so the text-generation head is loaded
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, dtype="auto")
```

This model is a fine-tuned version of Qwen/Qwen3-Coder-30B-A3B-Instruct on the training dataset. It achieves the results shown in the training table below on the evaluation set.
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 0.7592 | 1.7857 | 25 | 0.7807 | 0.7846 |
| 0.5326 | 3.5714 | 50 | 0.7476 | 0.7973 |
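As a rough sanity check on the logged schedule (assuming evenly spaced optimizer steps), the steps-per-epoch implied by the table can be recovered from either row's (epoch, step) pair:

```python
# (epoch, step) pairs taken from the training results table above
rows = [(1.7857, 25), (3.5714, 50)]

# Both rows should imply the same steps-per-epoch if logging is consistent
steps_per_epoch = [round(step / epoch) for epoch, step in rows]
print(steps_per_epoch)  # → [14, 14]
```

Both checkpoints agree on roughly 14 optimizer steps per epoch, with evaluation run every 25 steps.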
Base model: Qwen/Qwen3-Coder-30B-A3B-Instruct
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="finalform/error_analysis_Qwen3-Coder-30B-A3B-Instruct",
)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```