Understanding by Reconstruction: Reversing the Software Development Process for LLM Pretraining
Abstract
Large language models trained on reconstructed agent trajectories from multi-agent simulations show improved performance in long-context understanding, coding proficiency, and agentic capabilities.
While Large Language Models (LLMs) have achieved remarkable success in code generation, they often struggle with the deep, long-horizon reasoning required for complex software engineering. We attribute this limitation to the nature of standard pre-training data: static software repositories represent only the terminal state of an intricate intellectual process, abstracting away the intermediate planning, debugging, and iterative refinement. To bridge this gap, we propose a novel paradigm: understanding via reconstruction. We hypothesize that reverse-engineering the latent agentic trajectories -- the planning, reasoning, and debugging steps -- behind static repositories provides a far richer supervision signal than raw code alone. To operationalize this, we introduce a framework that synthesizes these trajectories using a multi-agent simulation. This process is grounded in the structural realities of the source repositories (e.g., dependency graphs and file hierarchies) to ensure fidelity. Furthermore, to guarantee the logical rigor of the synthetic data, we employ a search-based optimization technique that iteratively refines the Chain-of-Thought (CoT) reasoning to maximize the likelihood of the ground-truth code. Empirical results demonstrate that continual pre-training on these reconstructed trajectories significantly enhances Llama-3-8B's performance across diverse benchmarks, including long-context understanding, coding proficiency, and agentic capabilities.
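The search-based CoT refinement described in the abstract can be sketched as a simple hill-climbing loop: propose a mutated reasoning trace, score it by the likelihood it assigns to the ground-truth code, and keep it only if the score improves. This is a minimal illustration, not the paper's actual method; the names `refine_cot`, `toy_score`, and `toy_propose` are assumptions, and in practice the scorer would be an LLM's log-likelihood of the target code conditioned on the CoT.

```python
import random

def refine_cot(initial_cot, propose, score, steps=100, rng=None):
    """Greedy search over chain-of-thought candidates.

    propose(cot, rng) -> a mutated candidate CoT
    score(cot)        -> proxy for log p(ground-truth code | cot)
    A candidate is accepted only when it strictly improves the score,
    so the returned score is monotonically non-decreasing.
    """
    rng = rng or random.Random(0)
    best, best_score = initial_cot, score(initial_cot)
    for _ in range(steps):
        candidate = propose(best, rng)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

# Toy stand-ins: a "CoT" is a list of step names, and the score rewards
# steps that mention tokens from a hypothetical ground-truth program,
# with a small length penalty to discourage padding.
TARGET_TOKENS = {"parse", "graph", "topo_sort", "emit"}

def toy_score(cot):
    hits = sum(1.0 for step in cot if step in TARGET_TOKENS)
    return hits - 0.1 * len(cot)

def toy_propose(cot, rng):
    pool = ["parse", "graph", "topo_sort", "emit", "noise_a", "noise_b"]
    if not cot or rng.random() < 0.5:
        return cot + [rng.choice(pool)]   # add a step
    out = list(cot)
    out.pop(rng.randrange(len(out)))      # drop a step
    return out
```

Because only improving candidates are accepted, the refined trace can never score worse than the initial one; a real implementation would replace `toy_score` with a model-based likelihood and `toy_propose` with LLM-generated rewrites.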
Community
Good paper! Any plans to release the workflow?
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Agentic Proposing: Enhancing Large Language Model Reasoning via Compositional Skill Synthesis (2026)
- daVinci-Dev: Agent-native Mid-training for Software Engineering (2026)
- daVinci-Agency: Unlocking Long-Horizon Agency Data-Efficiently (2026)
- Pull Requests as a Training Signal for Repo-Level Code Editing (2026)
- Unseen-Codebases-Domain Data Synthesis and Training Based on Code Graphs (2026)
- Outcome-Conditioned Reasoning Distillation for Resolving Software Issues (2026)
- Chart Specification: Structural Representations for Incentivizing VLM Reasoning in Chart-to-Code Generation (2026)