---
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
- Qwen/QwQ-32B
- Qwen/Qwen2.5-Coder-32B-Instruct
- Qwen/Qwen2.5-32B
library_name: transformers
tags:
- mergekit
- merge
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---

# tomasmcm/QwQ-Coder-R1-Distill-32B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

Following up on [tomasmcm/sky-t1-coder-32b-flash](https://huggingface.co/tomasmcm/sky-t1-coder-32b-flash), this experiment tries to merge two reasoning models based on Qwen 32B with a Coder model. However, the merge seems to have caused the model to lose its thinking abilities, even when `<think>` is appended to the prompt, as in the sketch below.
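
A minimal sketch of how the missing reasoning traces can be checked with the `transformers` library; the prompt contents and generation settings here are illustrative, not a recommended configuration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tomasmcm/QwQ-Coder-R1-Distill-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a function that checks whether a number is prime."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Appending "<think>" is the workaround mentioned above; in this merge it
# did not bring back the reasoning traces.
prompt += "<think>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```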

## Merge Details
### Merge Method

This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method, with [Qwen/Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) as the base.
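
For intuition, the sketch below shows the select / calculate / erase steps that give SCE its name, applied to a single parameter tensor. This is a rough reading of the paper, not mergekit's actual implementation; the function name and numerical details are illustrative. Note that with `select_topk: 1.0`, as in the configuration further down, the selection step keeps every element.

```python
import torch

def sce_merge_tensor(base, deltas, select_topk=1.0):
    """Merge one parameter tensor from task vectors (fine-tuned minus base)."""
    stacked = torch.stack(deltas)  # (n_models, ...)

    # Select: keep only the fraction of elements with the highest variance
    # across the source models (select_topk=1.0 keeps everything).
    if select_topk < 1.0:
        var = stacked.var(dim=0, unbiased=False)
        k = max(1, int(select_topk * var.numel()))
        cutoff = var.flatten().topk(k).values[-1]
        stacked = stacked * (var >= cutoff)

    # Calculate: per-model fusion weights from the energy of each model's
    # selected delta elements.
    weights = stacked.pow(2).flatten(start_dim=1).sum(dim=1)
    weights = weights / weights.sum().clamp(min=1e-12)
    w = weights.view(-1, *([1] * (stacked.dim() - 1)))

    # Erase: per element, drop deltas whose sign disagrees with the
    # majority sign, resolving conflicts between the source models.
    majority = stacked.sum(dim=0).sign()
    mask = (stacked.sign() == majority).to(stacked.dtype)

    # Fuse the surviving deltas and add them back onto the base weights.
    fused = (w * stacked * mask).sum(dim=0) / (w * mask).sum(dim=0).clamp(min=1e-12)
    return base + fused
```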

### Models Merged

The following models were included in the merge:
* [deepseek-ai/DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B)
* [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B)
* [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  # Pivot model
  - model: Qwen/Qwen2.5-32B
  # Target models
  - model: Qwen/Qwen2.5-Coder-32B-Instruct
  - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
  - model: Qwen/QwQ-32B
merge_method: sce
base_model: Qwen/Qwen2.5-32B
parameters:
  select_topk: 1.0
dtype: bfloat16
```
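
To reproduce the merge, the configuration can be saved to a file and passed to mergekit's `mergekit-yaml` command, e.g. `mergekit-yaml sce.yaml ./QwQ-Coder-R1-Distill-32B` (file and output names are illustrative).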