# Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch

Model card for **ScaleQuest-Qwen2-Math-7B-QGen**.
We introduce **ScaleQuest**, a novel and scalable data synthesis method that uses small open-source models to generate questions from scratch, without the need for seed data or complex augmentation constraints.
## Datasets & Models
**Math Dataset**: link
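If you only want the training data, it can presumably be loaded with the 🤗 Datasets library. Below is a minimal sketch; the repo id `dyyyyyyyy/ScaleQuest-Math` is an assumption inferred from the naming pattern of the model repos, so double-check it against the dataset link above:

```python
# Hypothetical sketch: loading the released math dataset with 🤗 Datasets.
# The repo id "dyyyyyyyy/ScaleQuest-Math" is an assumption; use the id from
# the dataset link above if it differs.
from datasets import load_dataset

ds = load_dataset("dyyyyyyyy/ScaleQuest-Math", split="train")
print(ds[0])  # inspect the first example
```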
We release two question generator models and four problem-solving models.
| Model | Type | MATH | OlympiadBench | 🤗 HuggingFace Download Link |
| --- | --- | --- | --- | --- |
| ScaleQuest-DeepSeekMath-7B-QGen | question generator | - | - | link |
| ScaleQuest-Qwen2-Math-7B-QGen | question generator | - | - | link |
| Mistral-7B-ScaleQuest | problem solver | 62.9 | 26.8 | link |
| Llama3-8B-ScaleQuest | problem solver | 64.4 | 25.3 | link |
| DeepSeekMath-7B-ScaleQuest | problem solver | 66.6 | 29.9 | link |
| Qwen2-Math-7B-ScaleQuest | problem solver | 73.4 | 38.5 | link |
## Demo usage

Below is an example of generating questions with ScaleQuest-Qwen2-Math-7B-QGen using vLLM:
```python
from vllm import LLM, SamplingParams

model_name = "dyyyyyyyy/ScaleQuest-Qwen2-Math-7B-QGen"

# The generator writes a math question as the *user* turn of a Qwen2-style
# chat template, so the prompt ends right after "<|im_start|>user\n".
pre_query_template = "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n"
stop_tokens = ["<|im_start|>", "<|im_end|>", "<|endoftext|>"]

llm = LLM(
    model=model_name,
    tokenizer=model_name,
    tensor_parallel_size=1,
    max_model_len=4096,
    enable_prefix_caching=True,
    trust_remote_code=True,
    swap_space=16,
    gpu_memory_utilization=0.95,
)

sampling_params = SamplingParams(
    n=4,               # sample four candidate questions per prompt
    max_tokens=1024,
    temperature=1.0,   # high temperature for diverse questions
    top_p=0.99,
    stop=stop_tokens,  # stop at the end of the generated user turn
)

outputs = llm.generate([pre_query_template], sampling_params)

for output in outputs:
    prompt = output.prompt
    for idx, generated_output in enumerate(output.outputs):
        generated_text = generated_output.text
        print(f"Sample {idx + 1}:")
        print(f"Prompt: {prompt!r}")
        print(f"Generated text: {generated_text!r}")
        print("-" * 50)
```
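The generated questions can then be answered with one of the problem solvers from the table above. Below is a minimal sketch; both the repo id `dyyyyyyyy/Qwen2-Math-7B-ScaleQuest` (inferred from the naming pattern) and the Qwen2-style chat template are assumptions, so adjust them to match the actual solver repo linked in the table:

```python
# Hypothetical sketch: solving a generated question with a released problem
# solver. Both the repo id and the chat template are assumptions; adjust them
# to match the solver repo linked in the table above.
from vllm import LLM, SamplingParams

solver_name = "dyyyyyyyy/Qwen2-Math-7B-ScaleQuest"  # assumed repo id
question = "What is the sum of the first 100 positive integers?"  # e.g. a question from the generator above

# Complete chat turn: system prompt plus user question, ending at the
# assistant turn so the solver writes the solution.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    f"<|im_start|>user\n{question}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

solver = LLM(model=solver_name, max_model_len=4096, gpu_memory_utilization=0.95)
params = SamplingParams(
    temperature=0.0,   # greedy decoding for a single deterministic solution
    max_tokens=1024,
    stop=["<|im_end|>", "<|endoftext|>"],
)

outputs = solver.generate([prompt], params)
print(outputs[0].outputs[0].text)
```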
## Citation

```bibtex
@article{ding2024unleashing,
  title={Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch},
  author={Ding, Yuyang and Shi, Xinyu and Liang, Xiaobo and Li, Juntao and Zhu, Qiaoming and Zhang, Min},
  journal={arXiv preprint arXiv:2410.18693},
  year={2024}
}
```