---
license: cc-by-4.0
task_categories:
  - text-generation
language:
  - en
tags:
  - code
  - android
  - mobile
  - benchmark
  - swe-bench
  - kotlin
  - dart
  - flutter
  - react-native
  - issue-resolution
pretty_name: MobileDev-Bench
size_categories:
  - n<1K
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test.jsonl
---

# MobileDev-Bench

MobileDev-Bench is a benchmark of 415 verified real-world issue-resolution tasks collected from 19 production mobile app repositories spanning Android Native (Java/Kotlin), React Native (TypeScript), and Flutter (Dart). It evaluates the ability of large language models to resolve authentic developer-reported issues in mobile applications through execution-based validation.

**Paper:** MobileDev-Bench: A Benchmark for Issue Resolution in Mobile Application Development

**GitHub:** mobiledev-bench (evaluation harness, scripts, Docker environments)


## 📊 Key Facts

| Metric | Value |
|---|---|
| Tasks | 415 verified instances |
| Repositories | 19 production mobile apps |
| Frameworks | Android Native · React Native · Flutter |
| Languages | Kotlin · Java · TypeScript · Dart |
| Difficulty tiers | Easy (139) · Medium (140) · Hard (136) |
| Avg. files changed | 13.5 |
| Avg. lines changed | 344.2 |
| Multi-artifact fixes | 42.2% of instances |
| Human verified | ✅ All instances manually verified |

## 📖 Dataset Description

### Motivation

Existing issue-resolution benchmarks (SWE-bench and its extensions) target general-purpose Python/JavaScript libraries. Mobile app development imposes distinct engineering constraints: platform-defined lifecycles, framework-specific build systems (Gradle, pub, npm), and correctness that depends on coordinated changes across heterogeneous artifacts — source files, manifests, resource definitions, and configuration metadata. MobileDev-Bench is the first benchmark to evaluate LLMs on these challenges at scale.

### Construction

Each instance was derived from a merged GitHub pull request that resolves at least one linked issue. The pipeline:

  1. Crawl — Pull requests and linked issues collected from 19 repositories (≥400 GitHub stars each).
  2. Filter — Instances with no linked issue, no test changes, or non-compilable base commits are excluded.
  3. Verify — Test execution under three configurations (base commit, test patch only, full fix) confirms that the fix produces a genuine Fail-to-Pass or None-to-Pass transition.
  4. Annotate — Human annotators assign difficulty tiers (Easy / Medium / Hard) and label artifact types involved in each fix.
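The verification step above can be pictured as a small decision rule: an instance is kept only if, comparing test outcomes across the three configurations, at least one test genuinely moves from Fail (or None) to Pass once the fix is applied. The sketch below is a hypothetical simplification of that rule, not the actual harness; outcome dicts and the `is_valid_instance` name are illustrative.

```python
# Hypothetical sketch of the three-configuration verification rule.
# Each dict maps a test name to an outcome: "pass", "fail", or "none"
# (test absent or not run in that configuration).

def is_valid_instance(base: dict, test_only: dict, full_fix: dict) -> bool:
    """Keep an instance only if some test moves Fail->Pass or None->Pass
    when the fix is applied, and was not already passing at the base commit."""
    for test, after in full_fix.items():
        before = test_only.get(test, "none")      # test patch applied, no fix
        at_base = base.get(test, "none")          # base commit only
        if after == "pass" and before in ("fail", "none") and at_base != "pass":
            return True
    return False

# A test added by the test patch fails without the fix, passes with it: valid.
print(is_valid_instance(
    base={},                                      # test does not exist yet
    test_only={"LoginTest.testRetry": "fail"},
    full_fix={"LoginTest.testRetry": "pass"},
))  # True
```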

### Splits

| Split | Instances | Description |
|---|---|---|
| test | 415 | Verified benchmark instances with full evaluation metadata (`fix_patch`, `test_patch`, `test_command`, test results) |

## 🗂️ Field Descriptions

| Field | Type | Description |
|---|---|---|
| `instance_id` | string | Unique identifier: `{org}__{repo}-{number}` |
| `org` | string | GitHub organisation (e.g. `thunderbird`) |
| `repo` | string | GitHub repository name (e.g. `thunderbird-android`) |
| `number` | int | Pull request number |
| `state` | string | PR state at collection time (`closed`) |
| `title` | string | Pull request title |
| `body` | string | Pull request description |
| `base` | string | JSON object with base branch label, ref, and SHA |
| `resolved_issues` | string | JSON array of linked issue objects (number, title) |
| `lang` | string | Primary language: `kotlin`, `java`, `dart`, or `typescript` |
| `problem_statement` | string | Concatenated issue text; the task description shown to the model |
| `hints` | string | Concatenated issue comments providing additional context |
| `pull_url` | string | Full GitHub URL of the pull request |
| `issue_urls` | string | JSON array of linked issue URLs |
| `fix_patch` | string | Gold unified diff of the fix (applied to the base commit) |
| `test_patch` | string | Unified diff adding or modifying tests that validate the fix |
| `test_command` | string | Shell command to execute the relevant test suite |
| `fixed_tests` | string | JSON list of test names that transition from Fail/None to Pass |
| `f2p_tests` | string | JSON list of Fail-to-Pass test names |
| `p2p_tests` | string | JSON list of Pass-to-Pass test names |
| `s2p_tests` | string | JSON list of Skip-to-Pass test names |
| `n2p_tests` | string | JSON list of None-to-Pass test names |
| `run_result` | string | Outcome of test execution on the base commit |
| `test_patch_result` | string | Outcome with the test patch only (no fix) |
| `fix_patch_result` | string | Outcome with both the test patch and fix applied |
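Note that structured fields (`resolved_issues`, the `*_tests` lists, and so on) are stored as JSON-encoded strings, so they need one decoding step after loading. A minimal sketch, using a hand-made instance dict whose values are illustrative rather than taken from the dataset:

```python
import json

# Illustrative instance; real rows come from load_dataset(...).
# The PR number, issue, and test name below are made up.
instance = {
    "instance_id": "thunderbird__thunderbird-android-1234",
    "lang": "kotlin",
    "resolved_issues": '[{"number": 42, "title": "Crash on sync"}]',
    "f2p_tests": '["MailSyncTest.testRetryAfterFailure"]',
}

# JSON-encoded string fields must be decoded before use.
issues = json.loads(instance["resolved_issues"])
f2p = json.loads(instance["f2p_tests"])

print(issues[0]["number"])  # 42
print(f2p)                  # ['MailSyncTest.testRetryAfterFailure']
```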

## 🏠 Source Repositories

| Repository | Language | Link |
|---|---|---|
| zulip/zulip-flutter | Dart | [GitHub](https://github.com/zulip/zulip-flutter) |
| PalisadoesFoundation/talawa | Dart | [GitHub](https://github.com/PalisadoesFoundation/talawa) |
| AntennaPod/AntennaPod | Java | [GitHub](https://github.com/AntennaPod/AntennaPod) |
| Futsch1/medTimer | Java | [GitHub](https://github.com/Futsch1/medTimer) |
| thunderbird/thunderbird-android | Kotlin | [GitHub](https://github.com/thunderbird/thunderbird-android) |
| wordpress-mobile/wordpress-android | Kotlin | [GitHub](https://github.com/wordpress-mobile/wordpress-android) |
| element-hq/element-x-android | Kotlin | [GitHub](https://github.com/element-hq/element-x-android) |
| streetcomplete/StreetComplete | Kotlin | [GitHub](https://github.com/streetcomplete/StreetComplete) |
| tuskyapp/Tusky | Kotlin | [GitHub](https://github.com/tuskyapp/Tusky) |
| commons-app/apps-android-commons | Kotlin | [GitHub](https://github.com/commons-app/apps-android-commons) |
| openhab/openhab-android | Kotlin | [GitHub](https://github.com/openhab/openhab-android) |
| LemmyNet/jerboa | Kotlin | [GitHub](https://github.com/LemmyNet/jerboa) |
| PaulWoitaschek/Voice | Kotlin | [GitHub](https://github.com/PaulWoitaschek/Voice) |
| mjaakko/NeoStumbler | Kotlin | [GitHub](https://github.com/mjaakko/NeoStumbler) |
| JackEblan/Geto | Kotlin | [GitHub](https://github.com/JackEblan/Geto) |
| NMF-earth/nmf-app | TypeScript | [GitHub](https://github.com/NMF-earth/nmf-app) |
| Expensify/App | TypeScript | [GitHub](https://github.com/Expensify/App) |
| RocketChat/Rocket.Chat.ReactNative | TypeScript | [GitHub](https://github.com/RocketChat/Rocket.Chat.ReactNative) |
| artsy/eigen | TypeScript | [GitHub](https://github.com/artsy/eigen) |

## 🚀 Usage

```python
from datasets import load_dataset

ds = load_dataset("MobileDev-Bench/mobiledev-bench", split="test")
```
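After loading, rows can be sliced by framework language via the `lang` field (with the `datasets` library, `ds.filter(lambda r: r["lang"] == "kotlin")` does this directly). A minimal sketch of the same grouping using plain dicts in place of dataset rows; the instance IDs below are made up:

```python
# Sketch: group instances by primary language. Plain dicts stand in for
# dataset rows so the snippet runs without downloading the dataset.
rows = [
    {"instance_id": "a__b-1", "lang": "kotlin"},
    {"instance_id": "c__d-2", "lang": "dart"},
    {"instance_id": "e__f-3", "lang": "kotlin"},
]

by_lang: dict[str, list[str]] = {}
for row in rows:
    by_lang.setdefault(row["lang"], []).append(row["instance_id"])

print(by_lang["kotlin"])  # ['a__b-1', 'e__f-3']
print(by_lang["dart"])    # ['c__d-2']
```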

Or clone the repository directly:

```bash
git lfs install
git clone https://huggingface.co/datasets/MobileDev-Bench/mobiledev-bench
```