zwpride-iquestlab committed
Commit c93fcc5 · verified · 1 parent: 1246ed6

Update README.md

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -61,7 +61,7 @@ For the IQuest-Coder-V1-Thinking: We suggest using Temperature=1.0, TopP=0.95, T
 
 IQuest-Coder-V1 is a new family of code large language models (LLMs) designed to advance autonomous software engineering and code intelligence. Built on the innovative code-flow multi-stage training paradigm, IQuest-Coder-V1 captures the dynamic evolution of software logic, delivering state-of-the-art performance across critical dimensions:
 
-- **State-of-the-Art Performance**: Achieves leading results on SWE-Bench Verified, BigCodeBench, LiveCodeBench v6, and other major coding benchmarks, surpassing competitive models across agentic software engineering, competitive programming, and complex tool use.
+- **Performance**: Achieves leading results on SWE-Bench Verified (76.2%), BigCodeBench (49.9%), LiveCodeBench v6 (81.1%), and other major coding benchmarks, surpassing competitive models across agentic software engineering, competitive programming, and complex tool use.
 - **Code-Flow Training Paradigm**: Moving beyond static code representations, our models learn from repository evolution patterns, commit transitions, and dynamic code transformations to understand real-world software development processes.
 - **Dual Specialization Paths**: Bifurcated post-training delivers two specialized variants—Thinking models (utilizing reasoning-driven RL for complex problem-solving) and Instruct models (optimized for general coding assistance and instruction-following).
 - **Efficient Architecture**: The IQuest-Coder-V1-Loop variant introduces a recurrent mechanism that optimizes the trade-off between model capacity and deployment footprint. The 7B and 14B models adopt shallow architectures for faster inference speed.
@@ -205,7 +205,7 @@ claude --model IQuestCoder-V1-7B-Instruct
 | **BigCodeBench** | 0.0 | - |
 | **FullStackBench** | 0.0 | - |
 | **CruxEval** | 0.0 | - |
-| **LiveCodeBench** | 1.0 | 1.0 |
+| **LiveCodeBench** | 0.6 | 0.95 |
 | **Aider-Polyglot** | 0.95 | 0.85 |
 | **Mercury** | 0.2 | 0.85 |
 | **Bird** | 0.2 | 0.95 |
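The first hunk's context line mentions the suggested decoding settings for the Thinking model (Temperature=1.0, TopP=0.95). As a rough illustration of what those two knobs control, here is a minimal, generic temperature + nucleus (top-p) sampling sketch in plain Python; it is not the model's actual decoder, and the function name is my own.

```python
import math
import random


def sample_top_p(logits, temperature=1.0, top_p=0.95, rng=None):
    """Generic temperature + nucleus (top-p) sampling over raw logits.

    Illustrative sketch only: shows how Temperature=1.0 / TopP=0.95
    would shape a sampling step; not the model's actual decoder.
    Returns the index of the sampled token.
    """
    rng = rng or random.Random()
    # Temperature rescales logits, then softmax turns them into probabilities.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus step: keep the smallest set of highest-probability tokens
    # whose cumulative probability reaches top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalize over the kept tokens and draw one.
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

With top_p close to 1.0 (as suggested here), nearly the whole distribution stays eligible, so the sampler prunes only the long tail of very unlikely tokens while temperature 1.0 leaves the distribution's sharpness unchanged.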