arXiv:2601.19100

Reward Engineering for Reinforcement Learning in Software Tasks

Published on Jan 27

Abstract

Reward design for reinforcement learning in software tasks differs from traditional numerical objectives: because software goals are rarely a single number, rewards are built from proxy-based evaluation methods such as compilation checks, test outcomes, and quality metrics.

AI-generated summary

Reinforcement learning is increasingly used for code-centric tasks such as code generation, summarization, understanding, repair, testing, and optimization, a trend accelerated by large language models and autonomous agents. A key challenge is designing reward signals that make sense for software. In many RL problems the reward is a clear number, but in software the goal is rarely a single numeric objective; rewards are usually proxies, such as whether the code compiles, passes tests, or satisfies quality metrics. Many reward designs have been proposed for code-related tasks, yet the work is scattered across areas and papers, and no single survey brings these approaches together to show the full landscape of reward design for RL in software. In this survey, we provide the first systematic and comprehensive review of reward engineering for RL in software tasks, focusing on existing methods and techniques. We structure the literature along three complementary dimensions, summarizing the reward-design choices within each, and conclude with challenges and recommendations in the reward design space for software engineering (SE) tasks.
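To make the proxy idea concrete, the sketch below shows one way a composite proxy reward could be assembled. It is a minimal illustration under assumed design choices (the `proxy_reward` name, the 0.3/0.7 weights, and the compile-then-test ordering are not from the paper): the reward accumulates credit for compiling and for passing a test suite, mirroring the common proxies the abstract describes.

```python
import subprocess

def proxy_reward(source_path: str, test_cmd: list[str]) -> float:
    """Composite proxy reward for a candidate program.

    Checks and weights are illustrative assumptions, not any
    surveyed paper's specific design.
    """
    reward = 0.0

    # Proxy 1: the candidate must at least compile/parse.
    with open(source_path) as f:
        source = f.read()
    try:
        compile(source, source_path, "exec")
        reward += 0.3
    except SyntaxError:
        return reward  # no point running tests on broken code

    # Proxy 2: the test suite must pass,
    # e.g. test_cmd = ["pytest", "tests/", "-q"].
    result = subprocess.run(test_cmd, capture_output=True)
    if result.returncode == 0:
        reward += 0.7

    return reward
```

A fuller design might grade more finely, for instance rewarding the fraction of tests passed or adding quality-metric terms, in line with the proxy families mentioned above, rather than the all-or-nothing test check used here.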
