---
license: mit
task_categories:
  - text-generation
tags:
  - agents
  - tool-use
  - benchmark
  - enterprise-api
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
dataset_info:
  features:
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: test_id
      dtype: string
    - name: test_name
      dtype: string
    - name: service
      dtype: string
    - name: task_horizon
      dtype: int64
    - name: operation_type
      dtype: string
    - name: entity_scope
      dtype: string
    - name: information_availability
      dtype: string
    - name: prompt_ambiguity
      dtype: string
    - name: info
      dtype: string
  splits:
    - name: train
      num_bytes: 256049
      num_examples: 179
    - name: test
      num_bytes: 74705
      num_examples: 45
  download_size: 124036
  dataset_size: 330754
---

# Agent-Diff Bench

Website | Paper | GitHub

Agent-Diff is a benchmarking framework for evaluating agentic Large Language Models (LLMs) on real-world tasks in which agents execute code against external APIs. The benchmark provides access to real API interfaces (Slack, Box, Linear, Google Calendar) while sandboxing the environment in which calls are made and evaluated.

## Dataset Summary

The dataset contains 224 tasks covering enterprise software workflows, split 80/20 into train and test. It introduces a state-diff contract that separates process from outcome: task success is defined by whether the expected change in environment state was achieved, rather than by fuzzy trace or parameter matching.

- Services: Slack, Linear, Box, Google Calendar.
- Evaluation: State-diff based (comparing "before" and "after" snapshots of the sandboxed environment).
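
The splits can be loaded directly with the `datasets` library. A minimal sketch follows; the repository ID below is an assumption based on this card's location and may need to be adjusted:

```python
from datasets import load_dataset

# Repository ID assumed from this card's location; adjust if it differs.
ds = load_dataset("hubertmarek/agent-diff-bench")

print(ds["train"].num_rows, ds["test"].num_rows)  # 179 and 45 examples

example = ds["train"][0]
print(example["question"])  # natural-language task prompt
print(example["service"])   # one of the four services (Slack, Linear, Box, Google Calendar)
print(example["answer"])    # reference answer for the task
```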

## Sample Usage

The following example demonstrates how to run evaluations with the agent-diff SDK from the GitHub repository:

```python
import asyncio

from agent_diff import AgentDiff, PythonExecutorProxy, create_openai_tool
from agents import Agent, Runner

client = AgentDiff()


async def main():
    # List test suites (e.g., "Slack Bench")
    suite_list = client.list_test_suites(name="Slack Bench")
    slack_suite = suite_list.testSuites[0]
    suite = client.get_test_suite(slack_suite.id, expand=True)

    for test in suite.tests:
        prompt = test.prompt
        test_id = test.id

        # Initialise an isolated environment for this test
        env = client.init_env(testId=test_id)

        # Start the run (takes a snapshot before execution)
        run = client.start_run(envId=env.environmentId, testId=test_id)

        # Set up the agent with a proxied code-execution tool
        python_executor = PythonExecutorProxy(env.environmentId)
        python_tool = create_openai_tool(python_executor)

        agent = Agent(
            name="Slack Assistant",
            instructions="Use execute_python tool to interact with Slack API. Authentication is handled automatically.",
            tools=[python_tool],
        )

        # Run the agent on the task
        response = await Runner.run(agent, prompt)

        # Compute the evaluation based on the state diff
        client.evaluate_run(runId=run.runId)
        run_result = client.get_results_for_run(runId=run.runId)

        print(f"Test: {test_id}, Score: {run_result.score}")

        # Clean up the sandboxed environment
        client.delete_env(envId=env.environmentId)


asyncio.run(main())
```
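
Conceptually, the evaluation compares the snapshot taken at `start_run` with the environment state after the agent finishes. The toy sketch below only illustrates that before/after comparison with plain dictionaries; the snapshot keys are invented for illustration and this is not the SDK's actual evaluator:

```python
# Illustrative only: the real evaluator runs inside the agent-diff service.
def state_diff(before: dict, after: dict) -> dict:
    """Return keys whose values changed between two environment snapshots."""
    keys = before.keys() | after.keys()
    return {k: (before.get(k), after.get(k)) for k in keys if before.get(k) != after.get(k)}


# Hypothetical snapshots of a Slack sandbox before and after an agent run.
before = {"channels/general/topic": "", "channels/general/members": 3}
after = {"channels/general/topic": "Q3 planning", "channels/general/members": 3}

expected = {"channels/general/topic": ("", "Q3 planning")}
print(state_diff(before, after) == expected)  # True -> the expected state change occurred
```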

## Citation

```bibtex
@article{pysklo2025agentdiff,
  title={Agent-Diff: Benchmarking LLM Agents on Enterprise API Tasks via Code Execution with State-Diff-Based Evaluation},
  author={Hubert Marek Pysklo and others},
  journal={arXiv preprint arXiv:2602.11224},
  year={2025}
}
```