Add metadata and links to paper and code

#1
by nielsr (HF Staff) · opened
Files changed (1)
  1. README.md +11 -6
README.md CHANGED
@@ -1,19 +1,22 @@
  ---
  language:
  - en
- ...
  ---
  # AscendKernelGen/KernelGen-LM-32B

  ![License](https://img.shields.io/badge/License-Apache-yellow)
  [![arXiv](https://img.shields.io/badge/arXiv-2601.07160-b31b1b.svg)](https://arxiv.org/abs/2601.07160)

- KernelGen-LM-32B is a state-of-the-art domain-adaptive large language model specialized for low-level NPU kernel generation, specifically for the Huawei Ascend architecture using the AscendC programming language. Built upon the Qwen3-32B backbone, it is trained on the Ascend-CoT dataset and refined via reinforcement learning with execution feedback. It achieves unprecedented success rates in generating complex, functional hardware kernels, improving compilation success on L2 tasks from 0% (baseline) to 96.5% (Pass@10), while functional correctness achieves
- 40.5% compared to the baseline’s complete failure.

- **Other artifacts:**
- * The **AscendKernelGen Technical Report** is published at https://arxiv.org/abs/2601.07160.
- * The **NPUKernelBench** evaluation framework is published at https://git.openi.org.cn/PCL-Benchmark/NPUKernelBench.

  ## Introduction

@@ -25,6 +28,7 @@ Our framework, **AscendKernelGen (AKGen)**, bridges the gap between general-purp
  * **Performance:** The model demonstrates significant improvement on complex Level-2 kernels compared to baselines, effectively solving tasks where general-purpose models (like Qwen3, Llama3.1) fail completely.

  ## Citation
  @article{cao2026ascendkernelgen,
  title={AscendKernelGen: A Systematic Study of LLM-Based Kernel Generation for Neural Processing Units},
  author={Xinzi Cao and Jianyang Zhai and Pengfei Li and Zhiheng Hu and Cen Yan and Bingxu Mu and Guanghuan Fang and Bin She and Jiayu Li and Yihan Su and Dongyang Tao and Xiansong Huang and Fan Xu and Feidiao Yang and Yao Lu and Chang-Dong Wang and Yutong Lu and Weicheng Xue and Bin Zhou and Yonghong Tian},
@@ -32,3 +36,4 @@ Our framework, **AscendKernelGen (AKGen)**, bridges the gap between general-purp
  year={2026},
  url={https://arxiv.org/abs/2601.07160}
  }
 
 
  ---
  language:
  - en
+ license: apache-2.0
+ library_name: transformers
+ pipeline_tag: text-generation
  ---
+
  # AscendKernelGen/KernelGen-LM-32B

  ![License](https://img.shields.io/badge/License-Apache-yellow)
  [![arXiv](https://img.shields.io/badge/arXiv-2601.07160-b31b1b.svg)](https://arxiv.org/abs/2601.07160)

+ KernelGen-LM-32B is a state-of-the-art, domain-adaptive large language model specialized for low-level NPU kernel generation, targeting the Huawei Ascend architecture and the AscendC programming language. Built on the Qwen3-32B backbone, it is trained on the Ascend-CoT dataset and refined via reinforcement learning with execution feedback. It achieves unprecedented success rates in generating complex, functional hardware kernels, improving compilation success on L2 tasks from 0% (baseline) to 96.5% (Pass@10), while functional correctness reaches 40.5% compared to the baseline’s complete failure.
 
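Pass@k figures such as the Pass@10 numbers above are conventionally computed with the unbiased combinatorial estimator used in code-generation benchmarks; this is an assumption, since the card does not specify the exact evaluation script. A minimal sketch with hypothetical per-task counts:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations of which c are
    correct, succeeds."""
    if n - c < k:
        # Every possible k-subset must contain a correct sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical task: 10 generations, 4 of which compile and pass tests.
print(pass_at_k(10, 4, 1))   # -> 0.4 (expected pass@1)
print(pass_at_k(10, 4, 10))  # -> 1.0 (k = n, so a correct sample is always drawn)
```

Per-task values are then averaged over the benchmark to produce the reported percentage.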
+ **Links:**
+ * **Paper:** [AscendKernelGen: A Systematic Study of LLM-Based Kernel Generation for Neural Processing Units](https://huggingface.co/papers/2601.07160)
+ * **Code:** [GitHub - weich97/NPUKernelBench](https://github.com/weich97/NPUKernelBench)
+ * **Evaluation Framework:** [NPUKernelBench (OpenI)](https://git.openi.org.cn/PCL-Benchmark/NPUKernelBench)

  ## Introduction

  * **Performance:** The model demonstrates significant improvement on complex Level-2 kernels compared to baselines, effectively solving tasks where general-purpose models (like Qwen3, Llama3.1) fail completely.

  ## Citation
+ ```bibtex
  @article{cao2026ascendkernelgen,
  title={AscendKernelGen: A Systematic Study of LLM-Based Kernel Generation for Neural Processing Units},
  author={Xinzi Cao and Jianyang Zhai and Pengfei Li and Zhiheng Hu and Cen Yan and Bingxu Mu and Guanghuan Fang and Bin She and Jiayu Li and Yihan Su and Dongyang Tao and Xiansong Huang and Fan Xu and Feidiao Yang and Yao Lu and Chang-Dong Wang and Yutong Lu and Weicheng Xue and Bin Zhou and Yonghong Tian},

  year={2026},
  url={https://arxiv.org/abs/2601.07160}
  }
+ ```