Foam GPT 9 models: Base Qwen3-Coder-30B-A3B-Instruct
This model is a fine-tuned version of Qwen/Qwen3-Coder-30B-A3B-Instruct on the train dataset. It achieves the following results on the evaluation set (final epoch):
- Loss: 0.0623
- Accuracy: 0.9851
Model description
More information needed

Intended uses & limitations
More information needed

Training and evaluation data
More information needed
Training hyperparameters
More information needed

Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| No log | 0.1481 | 2 | 0.2477 | 0.9611 |
| 0.2441 | 0.2963 | 4 | 0.1997 | 0.9648 |
| 0.2367 | 0.4444 | 6 | 0.1587 | 0.9688 |
| 0.2367 | 0.5926 | 8 | 0.1476 | 0.9710 |
| 0.1895 | 0.7407 | 10 | 0.1318 | 0.9732 |
| 0.1361 | 0.8889 | 12 | 0.1172 | 0.9759 |
| 0.1361 | 1.0 | 14 | 0.1053 | 0.9783 |
| 0.18 | 1.1481 | 16 | 0.0985 | 0.9792 |
| 0.1193 | 1.2963 | 18 | 0.0932 | 0.9798 |
| 0.1193 | 1.4444 | 20 | 0.0875 | 0.9804 |
| 0.0823 | 1.5926 | 22 | 0.0840 | 0.9806 |
| 0.1175 | 1.7407 | 24 | 0.0778 | 0.9814 |
| 0.1175 | 1.8889 | 26 | 0.0737 | 0.9827 |
| 0.0898 | 2.0 | 28 | 0.0704 | 0.9836 |
| 0.0948 | 2.1481 | 30 | 0.0689 | 0.9838 |
| 0.0948 | 2.2963 | 32 | 0.0670 | 0.9841 |
| 0.0739 | 2.4444 | 34 | 0.0653 | 0.9841 |
| 0.0526 | 2.5926 | 36 | 0.0643 | 0.9845 |
| 0.0526 | 2.7407 | 38 | 0.0634 | 0.9845 |
| 0.0564 | 2.8889 | 40 | 0.0625 | 0.9847 |
| 0.0678 | 3.0 | 42 | 0.0616 | 0.9850 |
| 0.0678 | 3.1481 | 44 | 0.0615 | 0.9852 |
| 0.0499 | 3.2963 | 46 | 0.0616 | 0.9853 |
| 0.0437 | 3.4444 | 48 | 0.0620 | 0.9853 |
| 0.0437 | 3.5926 | 50 | 0.0621 | 0.9851 |
| 0.0421 | 3.7407 | 52 | 0.0624 | 0.9852 |
| 0.0557 | 3.8889 | 54 | 0.0624 | 0.9852 |
| 0.0557 | 4.0 | 56 | 0.0623 | 0.9851 |
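Validation loss bottoms out at step 44 (epoch 3.15) and drifts slightly upward afterwards, so the best checkpoint is not the final one. As a convenience, the best row can be read off the table programmatically; a minimal sketch, where the table string is inlined verbatim from this card and `best_row` is a hypothetical helper, not part of any training script:

```python
# Find the logged checkpoint with the lowest validation loss in the
# training-results table above (copied verbatim from this model card).
TABLE = """
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| No log | 0.1481 | 2 | 0.2477 | 0.9611 |
| 0.2441 | 0.2963 | 4 | 0.1997 | 0.9648 |
| 0.2367 | 0.4444 | 6 | 0.1587 | 0.9688 |
| 0.2367 | 0.5926 | 8 | 0.1476 | 0.9710 |
| 0.1895 | 0.7407 | 10 | 0.1318 | 0.9732 |
| 0.1361 | 0.8889 | 12 | 0.1172 | 0.9759 |
| 0.1361 | 1.0 | 14 | 0.1053 | 0.9783 |
| 0.18 | 1.1481 | 16 | 0.0985 | 0.9792 |
| 0.1193 | 1.2963 | 18 | 0.0932 | 0.9798 |
| 0.1193 | 1.4444 | 20 | 0.0875 | 0.9804 |
| 0.0823 | 1.5926 | 22 | 0.0840 | 0.9806 |
| 0.1175 | 1.7407 | 24 | 0.0778 | 0.9814 |
| 0.1175 | 1.8889 | 26 | 0.0737 | 0.9827 |
| 0.0898 | 2.0 | 28 | 0.0704 | 0.9836 |
| 0.0948 | 2.1481 | 30 | 0.0689 | 0.9838 |
| 0.0948 | 2.2963 | 32 | 0.0670 | 0.9841 |
| 0.0739 | 2.4444 | 34 | 0.0653 | 0.9841 |
| 0.0526 | 2.5926 | 36 | 0.0643 | 0.9845 |
| 0.0526 | 2.7407 | 38 | 0.0634 | 0.9845 |
| 0.0564 | 2.8889 | 40 | 0.0625 | 0.9847 |
| 0.0678 | 3.0 | 42 | 0.0616 | 0.9850 |
| 0.0678 | 3.1481 | 44 | 0.0615 | 0.9852 |
| 0.0499 | 3.2963 | 46 | 0.0616 | 0.9853 |
| 0.0437 | 3.4444 | 48 | 0.0620 | 0.9853 |
| 0.0437 | 3.5926 | 50 | 0.0621 | 0.9851 |
| 0.0421 | 3.7407 | 52 | 0.0624 | 0.9852 |
| 0.0557 | 3.8889 | 54 | 0.0624 | 0.9852 |
| 0.0557 | 4.0 | 56 | 0.0623 | 0.9851 |
"""

def best_row(table: str) -> dict:
    """Return the logged row with the lowest validation loss."""
    rows = []
    for line in table.strip().splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        # Skip the header row and the markdown separator row.
        if cells[0] == "Training Loss" or set(cells[0]) <= {"-"}:
            continue
        rows.append({"step": int(cells[2]),
                     "val_loss": float(cells[3]),
                     "accuracy": float(cells[4])})
    return min(rows, key=lambda r: r["val_loss"])

print(best_row(TABLE))
# → {'step': 44, 'val_loss': 0.0615, 'accuracy': 0.9852}
```

Note that accuracy peaks slightly later (0.9853 at steps 46–48), so the "best" checkpoint depends on which metric you select on.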
Base model: Qwen/Qwen3-Coder-30B-A3B-Instruct