---
base_model:
- SicariusSicariiStuff/Impish_QWEN_14B-1M
- Qwen/Qwen2.5-14B-Instruct
- sometimesanotion/LamarckInfusion-14B-v1
- Qwen/Qwen2.5-Coder-14B
- suayptalha/Lamarckvergence-14B
- huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2
- Qwen/Qwen2.5-14B
- tanliboy/lambda-qwen2.5-14b-dpo-test
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
library_name: transformers
tags:
- mergekit
- merge
license: mit
---
# merge
|
|
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
|
|
## Merge Details
### Merge Method
|
|
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) as the base model.
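For intuition, Model Stock interpolates between the average of the fine-tuned weights and the pretrained (base) weights, choosing the interpolation ratio from the angular similarity of the fine-tuned models' task vectors. The sketch below is a rough illustration of that idea for a single weight tensor, not mergekit's actual implementation; the function name and the use of a mean pairwise cosine are simplifying assumptions.

```python
import numpy as np

def model_stock_merge(finetuned: list, base: np.ndarray) -> np.ndarray:
    """Illustrative Model Stock rule for one weight tensor (simplified).

    Interpolates between the average fine-tuned weight and the base weight
    with ratio t = k*cos / (1 + (k-1)*cos), where k is the number of
    fine-tuned models and cos approximates the similarity of their
    task vectors (deltas from the base).
    """
    k = len(finetuned)
    deltas = [w - base for w in finetuned]
    # Mean pairwise cosine similarity between task vectors (assumption:
    # the paper derives a single angle; averaging pairs is a simplification).
    cosines = []
    for i in range(k):
        for j in range(i + 1, k):
            a, b = deltas[i].ravel(), deltas[j].ravel()
            cosines.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    cos = float(np.mean(cosines)) if cosines else 1.0
    t = k * cos / (1 + (k - 1) * cos)
    w_avg = np.mean(finetuned, axis=0)
    return t * w_avg + (1 - t) * base

# When the fine-tuned models agree perfectly (cos = 1), t = 1 and the
# result is just their average; as they diverge, more base weight is kept.
base = np.zeros(4)
w = np.ones(4)
merged = model_stock_merge([w, w.copy()], base)
```
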
|
|
### Models Merged
|
|
The following models were included in the merge:
* [SicariusSicariiStuff/Impish_QWEN_14B-1M](https://huggingface.co/SicariusSicariiStuff/Impish_QWEN_14B-1M)
* [sometimesanotion/LamarckInfusion-14B-v1](https://huggingface.co/sometimesanotion/LamarckInfusion-14B-v1)
* [Qwen/Qwen2.5-Coder-14B](https://huggingface.co/Qwen/Qwen2.5-Coder-14B)
* [suayptalha/Lamarckvergence-14B](https://huggingface.co/suayptalha/Lamarckvergence-14B)
* [huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2](https://huggingface.co/huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2)
* [Qwen/Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B)
* [tanliboy/lambda-qwen2.5-14b-dpo-test](https://huggingface.co/tanliboy/lambda-qwen2.5-14b-dpo-test)
* [deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B)
|
|
### Configuration
|
|
The following YAML configuration was used to produce this model:
|
|
```yaml
models:
  - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B # logic
  - model: huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2
  - model: Qwen/Qwen2.5-14B # text generation
  - model: Qwen/Qwen2.5-14B-Instruct # chat assistant
  - model: Qwen/Qwen2.5-Coder-14B # coding
  - model: sometimesanotion/LamarckInfusion-14B-v1
  - model: suayptalha/Lamarckvergence-14B
  - model: tanliboy/lambda-qwen2.5-14b-dpo-test
  - model: SicariusSicariiStuff/Impish_QWEN_14B-1M

merge_method: model_stock
base_model: Qwen/Qwen2.5-14B-Instruct
normalize: true
int8_mask: true
dtype: bfloat16
```
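Before running a merge, it can help to sanity-check the configuration file. The sketch below (an illustration, not part of the original card; it assumes PyYAML is installed) parses the config above and verifies the fields mergekit expects to find:

```python
# Sanity-check the mergekit config before launching a merge.
# Assumes PyYAML is available (pip install pyyaml); CONFIG mirrors the
# YAML shown above.
import yaml

CONFIG = """
models:
  - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
  - model: huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2
  - model: Qwen/Qwen2.5-14B
  - model: Qwen/Qwen2.5-14B-Instruct
  - model: Qwen/Qwen2.5-Coder-14B
  - model: sometimesanotion/LamarckInfusion-14B-v1
  - model: suayptalha/Lamarckvergence-14B
  - model: tanliboy/lambda-qwen2.5-14b-dpo-test
  - model: SicariusSicariiStuff/Impish_QWEN_14B-1M
merge_method: model_stock
base_model: Qwen/Qwen2.5-14B-Instruct
normalize: true
int8_mask: true
dtype: bfloat16
"""

cfg = yaml.safe_load(CONFIG)
assert cfg["merge_method"] == "model_stock"
assert cfg["base_model"] == "Qwen/Qwen2.5-14B-Instruct"
assert len(cfg["models"]) == 9
```

With the config saved to a file (e.g. `config.yaml`), the merge itself is run with mergekit's `mergekit-yaml` command, pointing at that file and an output directory.
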