Update dataset
- README.md +2 -2
- data/test-00000-of-00001.json +2 -0

README.md CHANGED
@@ -19,9 +19,9 @@ A benchmark dataset for evaluating AI systems on challenging computer science pr
 
 ## Dataset Description
 
-This dataset contains
+This dataset contains 238 problems across two categories:
 - **Algorithmic**: 172 competitive programming problems with automated judging
-- **Research**:
+- **Research**: 66 open-ended research problems
 
 ## Dataset Structure
 

data/test-00000-of-00001.json CHANGED
@@ -214,6 +214,8 @@
 {"problem_id": "llm_sql/small", "category": "research", "statement": "Problem Setting\n---------------\n\nConsider a CSV file with $N$ rows and $M$ columns, where $M \\leq 10$. We feed each row to an LLM inference engine (with a prefix KV cache) by concatenating all column values in that row. For the $i$-th row with entries $A[i,1], A[i,2], \\ldots, A[i,M]$, we construct the input string:\n\n```math\nS_i = \\text{Concat}(\\text{string}(A[i,1]), \\text{string}(A[i,2]), \\ldots, \\text{string}(A[i,M]))\n```\n\nWhen requesting $S_i$ for $i > 1$, the prefix KV-cache hit rate depends on the longest common prefix with any previously seen request:\n\n```math\n\\text{hit\\_rate}_i =\n\\frac{\\max_{1 \\le j < i} \\text{LCP}(S_i, S_j)}{|S_i|}\n```\n\nwhere $\\text{LCP}(S, T)$ is the length of the longest common prefix between strings $S$ and $T$.\n\nYou are allowed to reorder the CSV columns. Let $p$ be a permutation of $\\{1, 2, ..., M\\}$. The reordered string for row $i$ becomes:\n\n```math\nS'_i = \\text{Concat}(\\text{string}(A[i,p_1]), \\text{string}(A[i,p_2]), \\ldots, \\text{string}(A[i,p_M]))\n```\n\nThe goal is to choose a permutation $p$ that maximizes the overall KV-cache hit rate:\n\n```math\n\\max_p\\;\n\\frac{\\sum_{i=2}^N \\max_{1 \\le j < i} \\text{LCP}(S'_i, S'_j)}\n     {\\sum_{i=1}^N |S'_i|}\n```\n\nTarget\n---\nMaximize the prefix hit rate shown above (higher is better).\n\n- **Hard Constraint**: Average runtime per dataset must be $\\leq 10$ seconds (score = 0 if exceeded), and the column merge constraint must be handled correctly.\n\n**Column Merges**:\n- Column merge specs are provided per dataset\n- Columns in each merge group are concatenated into a single column\n- The merged column replaces the original columns\n- Merge operations are applied before column reordering\n\nAPI Specification\n---\nImplement a `Solution` class:\n\n```python\nimport pandas as pd\n\nclass Solution:\n    def solve(\n        self,\n        df: pd.DataFrame,\n        early_stop: int = 100000,\n        row_stop: int = 4,\n        col_stop: int = 2,\n        col_merge: list = None,\n        one_way_dep: list = None,\n        distinct_value_threshold: float = 0.7,\n        parallel: bool = True,\n    ) -> pd.DataFrame:\n        \"\"\"\n        Reorder columns in the DataFrame to maximize prefix hit rate.\n        \n        Args:\n            df: Input DataFrame to optimize\n            early_stop: Early stopping parameter (default: 100000)\n            row_stop: Row stopping parameter (default: 4)\n            col_stop: Column stopping parameter (default: 2)\n            col_merge: List of column groups to merge (columns in each group are merged into one)\n            one_way_dep: List of one-way dependencies (not used in this variant)\n            distinct_value_threshold: Threshold for distinct values (default: 0.7)\n            parallel: Whether to use parallel processing (default: True)\n        \n        Returns:\n            DataFrame with reordered columns (same rows, different column order)\n        \"\"\"\n        # Your implementation\n        pass\n```\n\n**Evaluation Process**:\n1. Column merges are applied if specified\n2. Your `solve()` method reorders the remaining columns\n3. Rows are concatenated (no spaces) and the prefix hit rate is calculated\n\nScoring (0-100)\n---\n\nbaseline_hit_rate = Average prefix hit rate using original column order (0-point anchor)\navg_hit_rate = Your solution's average prefix hit rate across all datasets\n\nFor each dataset:\n    dataset_score = ((hit_rate - baseline_hit_rate) / (1.0 - baseline_hit_rate)) × 100\n\nfinal_score = Average of individual dataset scores\n\nScore is clamped to the [0, 100] range\n\n**Runtime Constraint**:\n- Average runtime per dataset must be ≤ 10 seconds\n- If average runtime exceeds 10 seconds, score = 0.0\n\n**Scoring Examples**:\n- baseline_hit_rate = 0.0 (worst), avg_hit_rate = 1.0 (perfect) → Score = 100\n- baseline_hit_rate = 0.5, avg_hit_rate = 0.5 → Score = 0\n- baseline_hit_rate = 0.5, avg_hit_rate = 0.75 → Score = 50\n- baseline_hit_rate = 0.5, avg_hit_rate = 1.0 → Score = 100\n\nImplementation Notes\n---\n- Row values are concatenated without spaces: `\"\".join(row.values)`\n- Column reordering should optimize for maximum prefix overlap in the concatenated string representation\n- Consider column dependencies, distinct value distributions, and merge requirements when reordering\n- Large datasets with $M > 10$ columns require efficient algorithms due to the larger search space\n- In our smaller dataset, $15k \\leq N \\leq 28k$ and $4 \\leq M \\leq 9$\n\n**Example input** (note: this example has more than 10 columns; please ignore that here)\n---\n```csv\nID,LIMIT_BAL,SEX,EDUCATION,MARRIAGE,AGE,PAY_0,PAY_2,PAY_3,PAY_4,PAY_5,PAY_6,BILL_AMT1,BILL_AMT2,BILL_AMT3,BILL_AMT4,BILL_AMT5,BILL_AMT6,PAY_AMT1,PAY_AMT2,PAY_AMT3,PAY_AMT4,PAY_AMT5,PAY_AMT6,default payment next month\n1,20000,2,2,1,24,2,2,-1,-1,-2,-2,3913,3102,689,0,0,0,0,689,0,0,0,0,1\n2,120000,2,2,2,26,-1,2,0,0,0,2,2682,1725,2682,3272,3455,3261,0,1000,1000,1000,0,2000,1\n3,90000,2,2,2,34,0,0,0,0,0,0,29239,14027,13559,14331,14948,15549,1518,1500,1000,1000,1000,5000,0\n4,50000,2,2,1,37,0,0,0,0,0,0,46990,48233,49291,28314,28959,29547,2000,2019,1200,1100,1069,1000,0\n5,50000,1,2,1,57,-1,0,-1,0,0,0,8617,5670,35835,20940,19146,19131,2000,36681,10000,9000,689,679,0\n...\n```\n\n**Example output**\n---\n```\nID,LIMIT_BAL,SEX,EDUCATION,MARRIAGE,AGE,PAY_0,PAY_2,PAY_3,PAY_4,PAY_5,PAY_6,BILL_AMT1,BILL_AMT2,BILL_AMT3,BILL_AMT4,BILL_AMT5,BILL_AMT6,PAY_AMT1,PAY_AMT2,PAY_AMT3,PAY_AMT4,PAY_AMT5,PAY_AMT6,default payment next month\n1,20000,2,2,1,24,2,2,-1,-1,-2,-2,3913,3102,689,0,0,0,0,689,0,0,0,0,1\n2,120000,2,2,2,26,-1,2,0,0,0,2,2682,1725,2682,3272,3455,3261,0,1000,1000,1000,0,2000,1\n3,90000,2,2,2,34,0,0,0,0,0,0,29239,14027,13559,14331,14948,15549,1518,1500,1000,1000,1000,5000,0\n4,50000,2,2,1,37,0,0,0,0,0,0,46990,48233,49291,28314,28959,29547,2000,2019,1200,1100,1069,1000,0\n5,50000,1,2,1,57,-1,0,-1,0,0,0,8617,5670,35835,20940,19146,19131,2000,36681,10000,9000,689,679,0\n```\n($p = 1, 2, \\ldots, M$, i.e., the identity permutation)\n", "config": "{\n  \"dependencies\": {\n    \"uv_project\": \"resources\"\n  },\n  \"datasets\": [],\n  \"runtime\": {\n    \"timeout_seconds\": 1800\n  },\n  \"tag\": \"db\"\n}\n"}
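
For concreteness, the hit-rate objective in the `llm_sql/small` record above can be evaluated with a short sketch. This is an illustration only, not the benchmark's actual judge: the function name `prefix_hit_rate` is ours, values are stringified with `str()` as an assumption, and the quadratic LCP loop is the naive formulation (a real evaluator would likely use a trie or sorted strings).

```python
import pandas as pd

def prefix_hit_rate(df: pd.DataFrame) -> float:
    """Hypothetical evaluator for the objective above (naive O(N^2) version)."""
    # S_i: concatenate the row's column values with no separator.
    strings = ["".join(str(v) for v in row) for row in df.itertuples(index=False)]
    total_len = sum(len(s) for s in strings)
    matched = 0
    seen = []
    for s in strings:
        best = 0
        for t in seen:  # longest common prefix against every earlier request
            k, limit = 0, min(len(s), len(t))
            while k < limit and s[k] == t[k]:
                k += 1
            best = max(best, k)
        matched += best
        seen.append(s)
    return matched / total_len if total_len else 0.0
```

Reordering columns with `df[candidate_order]` before calling this yields the objective value for a candidate permutation $p$.
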
 {"problem_id": "mamba2_scan", "category": "research", "statement": "Mamba2 Scan Optimization Problem\n==================================\n\nProblem Setting\n---------------\nDesign and optimize high-performance Triton kernels for Mamba2 scan computation on GPU. This problem focuses on implementing efficient sequential scan operations using chunked parallelism with Triton's JIT compilation system.\n\nThe challenge involves optimizing:\n- **Sequential scan computation**: Efficient computation of y_t = a_t * y_{t-1} + b_t * x_t\n- **Chunked parallelism**: Processing sequences in chunks to enable parallelism while maintaining correctness\n- **State management**: Efficiently managing and propagating state between chunks\n- **Memory access patterns**: Efficient loading and storing of X, A, B tensors and state\n- **Block tiling**: Optimal block sizes for GPU execution across different sequence lengths\n- **Performance benchmarking**: Achieving speedup over baseline PyTorch implementations\n\nTarget\n------\n- **Primary**: Maximize geometric mean speedup over baseline (higher is better)\n- **Secondary**: Ensure correctness across diverse sequence lengths and feature dimensions\n- **Tertiary**: Minimize kernel launch overhead and memory usage\n\nAPI Specification\n-----------------\nImplement a `Solution` class that returns a Triton kernel implementation:\n\n```python\nclass Solution:\n    def solve(self, spec_path: str = None) -> dict:\n        \"\"\"\n        Returns a dict with either:\n        - {\"code\": \"python_code_string\"}\n        - {\"program_path\": \"path/to/kernel.py\"}\n        \"\"\"\n        # Your implementation\n        pass\n```\n\nYour kernel implementation must provide:\n\n```python\nimport torch\nimport triton\nimport triton.language as tl\n\ndef chunk_scan(X: torch.Tensor, A: torch.Tensor, B: torch.Tensor, chunk: int = 128, BD: int = 128) -> torch.Tensor:\n    \"\"\"\n    Mamba2 chunked scan computation.\n    \n    Args:\n        X: Input tensor of shape (L, D) - input sequence (float16)\n        A: Input tensor of shape (L, D) - decay factors (float16)\n        B: Input tensor of shape (L, D) - input weights (float16)\n        chunk: Chunk size for parallel processing (default 128)\n        BD: Block dimension for feature dimension tiling (default 128)\n    \n    Returns:\n        Output tensor of shape (L, D) - scan output (float16)\n    \"\"\"\n    # Your implementation\n    pass\n```\n\nInput Specifications\n--------------------\n- **X**: Input tensor of shape `(L, D)` where:\n  - `L`: Sequence length (tested with 2048, 4096)\n  - `D`: Feature dimension (typically 512)\n- **A**: Decay factor tensor of shape `(L, D)` (float16, typically |A| < 0.5)\n- **B**: Input weight tensor of shape `(L, D)` (float16)\n- All inputs are `torch.float16` and on CUDA device\n- `chunk`: Chunk size for parallel processing (default 128)\n- `BD`: Block dimension for feature dimension tiling (default 128)\n- **Constraint**: L must be divisible by chunk\n\nOutput Specifications\n--------------------\n- Output tensor of shape `(L, D)` matching the input dimensions\n- Output dtype: `torch.float16`\n- Output device: Same as input (CUDA)\n\nCorrectness Requirements\n------------------------\n- Numerical correctness verified against PyTorch baseline implementation\n- Relative tolerance: 1e-2, Absolute tolerance: 5e-3\n- All test cases must pass for any score above 0\n- Sequential dependency must be correctly maintained: y_t = a_t * y_{t-1} + b_t * x_t\n\nScoring (0-100)\n---------------\nPerformance is measured against GPU baseline implementations:\n\n```\ngeometric_mean_gpu_time = geometric_mean(gpu_baseline_times)\ngeometric_mean_answer_time = geometric_mean(answer_times)\n\n# Linear interpolation: 0 points = 1x GPU baseline, 100 points = 200x GPU baseline\ntarget_time_0 = geometric_mean_gpu_time          # 0 points (1x GPU baseline)\ntarget_time_100 = geometric_mean_gpu_time / 200.0  # 100 points (200x speedup over GPU)\nscore = 100 * (target_time_0 - geometric_mean_answer_time) / (target_time_0 - target_time_100)\n```\n\n- 0 points = 1x GPU baseline performance\n- 100 points = 200x speedup over GPU baseline\n- Score is linearly interpolated between these two points\n\nNote: Correctness is verified against GPU baseline, and scoring spans from 1x GPU baseline (0 points) to 200x GPU baseline (100 points).\n\nEvaluation Details\n------------------\n- Test cases: L = 2048, 4096 (with D = 512)\n- Warmup phase: 10 iterations to stabilize GPU clocks and caches\n- Random seed: Fixed seed (0) for reproducible data generation\n- Strict correctness: Any test failure results in score of 0\n- Chunk size: 128, BD: 128\n\nAdditional Notes\n----------------\n- The benchmark uses float32 for PyTorch baseline (for numerical stability) but float16 for answer evaluation\n- Sequential scan operation: y_t = a_t * y_{t-1} + b_t * x_t\n- Chunked parallelism: Process sequence in chunks, maintaining state between chunks\n- State propagation: State must be correctly propagated from one chunk to the next\n- Consider using block tiling along the feature dimension (BD) for parallelism\n\n", "config": "tag: hpc\ndependencies:\n  uv_project: resources\nruntime:\n  environment: \"Triton 3.2.0 with CUDA 12.2 (triton-tlx image)\"\n  docker:\n    image: andylizf/triton-tlx:tlx-nv-cu122\n    gpu: true\n"}
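
As a reference point for the `mamba2_scan` recurrence, a plain sequential PyTorch version looks like the sketch below. The record does not show its baseline, so this is our assumption of an equivalent one: float32 accumulation (mirroring the float32-baseline note), a zero initial state, and our function name `scan_reference`.

```python
import torch

def scan_reference(X: torch.Tensor, A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    """Sequential y_t = a_t * y_{t-1} + b_t * x_t over the L dimension."""
    Xf, Af, Bf = X.float(), A.float(), B.float()  # float32 for numerical stability
    Y = torch.empty_like(Xf)
    y = torch.zeros(Xf.shape[1], device=Xf.device)  # assumed zero initial state
    for t in range(Xf.shape[0]):
        y = Af[t] * y + Bf[t] * Xf[t]
        Y[t] = y
    return Y.to(X.dtype)  # cast back to float16
```

A chunked kernel exploits the fact that within a chunk, the state entering position t only scales the incoming chunk state by the running product of decay factors, so per-chunk local scans can run in parallel and only the chunk-boundary states need a short sequential pass.
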
 {"problem_id": "mixed_gemm", "category": "research", "statement": "Mixed GEMM Optimization Problem\n=================================\n\nProblem Setting\n---------------\nDesign and optimize high-performance Triton kernels for Mixed GEMM (Linear + Bias + GELU) computation on GPU. This problem focuses on implementing efficient fused kernels that combine matrix multiplication, bias addition, and GELU activation using Triton's JIT compilation system.\n\nThe challenge involves optimizing:\n- **Fused computation**: Efficiently combining linear layer (X @ W + B) with GELU activation\n- **Memory access patterns**: Efficient loading and storing of X, W, B tensors\n- **Mixed precision**: Handling float16 inputs/outputs with float32 bias and accumulation\n- **GELU activation**: Implementing efficient GELU computation using CUDA libdevice functions\n- **Block tiling**: Optimal block sizes for GPU execution across different matrix sizes\n- **Performance benchmarking**: Achieving speedup over baseline PyTorch implementations\n\nTarget\n------\n- **Primary**: Maximize geometric mean speedup over baseline (higher is better)\n- **Secondary**: Ensure correctness across diverse matrix sizes\n- **Tertiary**: Minimize kernel launch overhead and memory usage\n\nAPI Specification\n-----------------\nImplement a `Solution` class that returns a Triton kernel implementation:\n\n```python\nclass Solution:\n    def solve(self, spec_path: str = None) -> dict:\n        \"\"\"\n        Returns a dict with either:\n        - {\"code\": \"python_code_string\"}\n        - {\"program_path\": \"path/to/kernel.py\"}\n        \"\"\"\n        # Your implementation\n        pass\n```\n\nYour kernel implementation must provide:\n\n```python\nimport torch\nimport triton\nimport triton.language as tl\n\ndef linear_gelu(X: torch.Tensor, W: torch.Tensor, B: torch.Tensor) -> torch.Tensor:\n    \"\"\"\n    Linear layer with GELU activation computation.\n    \n    Args:\n        X: Input tensor of shape (M, K) - input features (float16)\n        W: Weight tensor of shape (K, N) - weight matrix (float16)\n        B: Bias tensor of shape (N,) - bias vector (float32)\n    \n    Returns:\n        Output tensor of shape (M, N) - output with GELU activation (float16)\n    \"\"\"\n    # Your implementation\n    pass\n```\n\nInput Specifications\n--------------------\n- **X**: Input tensor of shape `(M, K)` where:\n  - `M`: Batch size (tested with 512, 1024)\n  - `K`: Input feature dimension (typically 4096)\n  - dtype: `torch.float16`\n- **W**: Weight tensor of shape `(K, N)` where:\n  - `N`: Output feature dimension (typically 4096)\n  - dtype: `torch.float16`\n- **B**: Bias tensor of shape `(N,)` where:\n  - dtype: `torch.float32`\n- All inputs are on CUDA device\n\nOutput Specifications\n--------------------\n- Output tensor of shape `(M, N)` matching the input batch and output feature dimensions\n- Output dtype: `torch.float16`\n- Output device: Same as input (CUDA)\n\nCorrectness Requirements\n------------------------\n- Numerical correctness verified against PyTorch baseline implementation\n- Relative tolerance: 1e-2, Absolute tolerance: 5e-3\n- All test cases must pass for any score above 0\n- GELU activation must be correctly implemented\n\nScoring (0-100)\n---------------\nPerformance is measured against GPU baseline implementations:\n\n```\ngeometric_mean_gpu_time = geometric_mean(gpu_baseline_times)\ngeometric_mean_answer_time = geometric_mean(answer_times)\n\n# Linear interpolation: 0 points = 1x GPU baseline, 100 points = 3x GPU baseline\ntarget_time_0 = geometric_mean_gpu_time        # 0 points (1x GPU baseline)\ntarget_time_100 = geometric_mean_gpu_time / 3.0  # 100 points (3x speedup over GPU)\nscore = 100 * (target_time_0 - geometric_mean_answer_time) / (target_time_0 - target_time_100)\n```\n\n- 0 points = 1x GPU baseline performance\n- 100 points = 3x speedup over GPU baseline\n- Score is linearly interpolated between these two points\n\nNote: Correctness is verified against GPU baseline, and scoring spans from 1x GPU baseline (0 points) to 3x GPU baseline (100 points).\n\nEvaluation Details\n------------------\n- Test cases: M = 512, 1024 (with N = 4096, K = 4096)\n- Warmup phase: 10 iterations to stabilize GPU clocks and caches\n- Random seed: Fixed seed (0) for reproducible data generation\n- Strict correctness: Any test failure results in score of 0\n\nAdditional Notes\n----------------\n- The benchmark uses float32 for PyTorch baseline (for numerical stability) but float16 for answer evaluation\n- GELU formula: gelu(x) = x * 0.5 * (1.0 + erf(x * 0.7071067811865476))\n- Consider using CUDA libdevice erf function: `tl.extra.cuda.libdevice.erf`\n- Accumulation should use float32 for numerical stability\n- Bias addition should be done after matrix multiplication but before GELU\n\n", "config": "dependencies:\n  uv_project: resources\ntag: hpc\nruntime:\n  environment: \"Triton 3.2.0 with CUDA 12.2 (triton-tlx image)\"\n  docker:\n    image: andylizf/triton-tlx:tlx-nv-cu122\n    gpu: true\n"}
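
Similarly, a float32 reference for the fused `linear_gelu` op follows directly from the GELU formula in the record (a sketch under our naming, not the benchmark's own baseline):

```python
import torch

def linear_gelu_reference(X: torch.Tensor, W: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    """gelu(X @ W + B) with the erf-based GELU from the statement."""
    acc = X.float() @ W.float() + B  # B is already float32; accumulate in float32
    out = acc * 0.5 * (1.0 + torch.erf(acc * 0.7071067811865476))
    return out.to(torch.float16)  # float16 output, matching the spec
```
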
+{"problem_id": "nbody_simulation/random_100k", "category": "research", "statement": "N-Body Simulation Problem - 100,000 Particles\n=============================================\n\nProblem Setting\n---------------\nDesign and optimize a high-performance parallel N-body simulation. In physics and astronomy, an N-body simulation models the dynamics of particles under gravitational forces. The available hardware is an AWS c7i.4xlarge.\n\nThe challenge involves optimizing:\n- **Loop parallelization**: Efficient parallel force computation across particles\n- **Acceleration structures**: Use structures such as a quad-tree for O(N log N) instead of O(N²), or other structures\n- **Load balancing**: Handling varying workloads per particle\n- **Parallel Programming Libraries**: Proper use of libraries like OpenMP\n\nThis variant tests performance on **100,000 particles** with 3 simulation iterations.\n\nTarget\n------\n- **Primary**: Ensure numerical correctness (tolerance: 1e-2)\n- **Secondary**: Maximize speedup over parallel brute-force baseline (higher is better)\n- **Tertiary**: Use algorithmic improvements (quad-tree, spatial hashing) to beat O(N²)\n\nSolution Format\n---------------\nSubmit a single C++ file (`.cpp`) that implements a `Simulator` class:\n\n```cpp\n#include \"world.h\"\n#include <omp.h>\n\nclass MySimulator : public Simulator {\nprivate:\n    // Persistent state across simulation steps\n    int numThreads = 8;\n    // Could store acceleration structures, pre-allocated buffers, etc.\n\npublic:\n    void init(int numParticles, StepParameters params) override {\n        // Called once before simulation starts\n        // Set thread count, pre-allocate structures, etc.\n        omp_set_num_threads(numThreads);\n    }\n\n    void simulateStep(std::vector<Particle> &particles,\n                      std::vector<Particle> &newParticles,\n                      StepParameters params) override {\n        // Called each simulation step\n        // For each particle i:\n        //   1. Compute total force from particles within params.cullRadius\n        //   2. Update particle using updateParticle()\n        //   3. Store result in newParticles[i]\n    }\n};\n\n// Factory function - must be implemented\nSimulator* createSimulator() {\n    return new MySimulator();\n}\n```\n\nProvided Types and Functions (in world.h)\n-----------------------------------------\n```cpp\nstruct Vec2 {\n    float x, y;\n    // Operators: +, -, *, length(), length2()\n};\n\nstruct Particle {\n    int id;\n    float mass;\n    Vec2 position;\n    Vec2 velocity;\n};\n\nstruct StepParameters {\n    float deltaTime = 0.2f;\n    float cullRadius = 1.0f;  // Only consider particles within this distance\n};\n\n// Simulator base class\nclass Simulator {\npublic:\n    virtual ~Simulator() = default;\n    virtual void init(int numParticles, StepParameters params) {}  // Optional\n    virtual void simulateStep(std::vector<Particle> &particles,\n                              std::vector<Particle> &newParticles,\n                              StepParameters params) = 0;  // Required\n};\n\n// Compute gravitational force between two particles\n// Returns Vec2(0,0) if distance > cullRadius or distance < 1e-3\ninline Vec2 computeForce(const Particle &target, const Particle &attractor,\n                         float cullRadius) {\n    auto dir = (attractor.position - target.position);\n    auto dist = dir.length();\n    if (dist < 1e-3f)\n        return Vec2(0.0f, 0.0f);\n    dir *= (1.0f / dist);\n    if (dist > cullRadius)\n        return Vec2(0.0f, 0.0f);\n    if (dist < 1e-1f)\n        dist = 1e-1f;\n    const float G = 0.01f;\n    Vec2 force = dir * target.mass * attractor.mass * (G / (dist * dist));\n    if (dist > cullRadius * 0.75f) {\n        float decay = 1.0f - (dist - cullRadius * 0.75f) / (cullRadius * 0.25f);\n        force *= decay;\n    }\n    return force;\n}\n\n// Apply force to particle and integrate position/velocity\ninline Particle updateParticle(const Particle &pi, Vec2 force,\n                               float deltaTime) {\n    Particle result = pi;\n    result.velocity += force * (deltaTime / pi.mass);\n    result.position += result.velocity * deltaTime;\n    return result;\n}\n```\n\nBaseline\n--------\nThe baseline is a simple OpenMP parallel brute-force O(N²) implementation:\n\n```cpp\n// Baseline for N-body simulation - simple OpenMP parallel brute-force\n// O(N²) approach with parallel outer loop\n// Solutions should aim to beat this baseline\n\n#include \"world.h\"\n#include <omp.h>\n\nclass BaselineSimulator : public Simulator {\nprivate:\n    int numThreads = 8;\n    \npublic:\n    void init(int numParticles, StepParameters params) override {\n        omp_set_num_threads(numThreads);\n    }\n    \n    void simulateStep(std::vector<Particle> &particles,\n                      std::vector<Particle> &newParticles,\n                      StepParameters params) override {\n        #pragma omp parallel for schedule(dynamic, 16)\n        for (int i = 0; i < (int)particles.size(); i++) {\n            auto pi = particles[i];\n            Vec2 force = Vec2(0.0f, 0.0f);\n            \n            for (size_t j = 0; j < particles.size(); j++) {\n                if (j == (size_t)i) continue;\n                if ((pi.position - particles[j].position).length() < params.cullRadius) {\n                    force += computeForce(pi, particles[j], params.cullRadius);\n                }\n            }\n            \n            newParticles[i] = updateParticle(pi, force, params.deltaTime);\n        }\n    }\n};\n\nSimulator* createSimulator() {\n    return new BaselineSimulator();\n}\n```\n\nTo beat the baseline, use algorithmic improvements like acceleration structures.\n\nPlease generate a `.cpp` file that follows the solution interface above, with the exact same signatures. The `Simulator` you write will be used in the following way:\n\n```cpp\ndouble runSimulation(World& world, Simulator* sim,\n                     StepParameters params, int numIterations) {\n    Timer timer;\n    timer.reset();\n    \n    // Initialize simulator at the start of each run (clean state)\n    sim->init(world.particles.size(), params);\n    \n    for (int iter = 0; iter < numIterations; iter++) {\n        world.newParticles.resize(world.particles.size());\n        sim->simulateStep(world.particles, world.newParticles, params);\n        world.particles.swap(world.newParticles);\n    }\n    \n    return timer.elapsed();\n}\n```\n\nCompilation\n-----------\nYour code is compiled with:\n```bash\ng++ -O2 -fopenmp -std=c++17 -I. -o benchmark solution.cpp\n```\n\nRequirements:\n- Can use OpenMP for parallelization\n- Must implement a `Simulator` subclass and `createSimulator()` factory function\n- May define additional helper classes/functions as needed\n- Do NOT modify `computeForce` or `updateParticle` functions\n\nCorrectness\n-----------\n\nWe will use the `BaselineSimulator` to get reference particle positions and compare your solution's output against them using the following code. We use a tolerance of `1e-2f`. If you fail the correctness check, you will get a score of zero.\n\n```cpp\nbool checkForCorrectness(const World& refW, const World& w, float tolerance = 1e-2f) {\n    if (w.particles.size() != refW.particles.size()) {\n        std::cerr << \"Mismatch: number of particles \" << w.particles.size()\n                  << \" does not match reference \" << refW.particles.size() << std::endl;\n        return false;\n    }\n\n    for (size_t i = 0; i < w.particles.size(); i++) {\n        auto errorX = std::abs(w.particles[i].position.x - refW.particles[i].position.x);\n        auto errorY = std::abs(w.particles[i].position.y - refW.particles[i].position.y);\n        if (errorX > tolerance || errorY > tolerance) {\n            std::cerr << \"Mismatch at index \" << i\n                      << \": result (\" << w.particles[i].position.x << \", \"\n                      << w.particles[i].position.y << \")\"\n                      << \" should be (\" << refW.particles[i].position.x << \", \"\n                      << refW.particles[i].position.y << \")\" << std::endl;\n            return false;\n        }\n    }\n    return true;\n}\n```\n\nScoring (0-100)\n---------------\nPerformance is measured by speedup over the parallel brute-force baseline:\n\n```\nspeedup = baseline_time / solution_time\nraw_score = min(speedup, 10.0)        # Cap at 10x speedup\nscore = (raw_score - 1.0) / 9.0 * 100  # Map 1x-10x to 0-100\n```\n\n- 0 points = No speedup (1x baseline performance)\n- ~11 points = 2x speedup\n- ~33 points = 4x speedup\n- ~56 points = 6x speedup\n- 100 points = 10x+ speedup\n\nNote: With 100k particles, algorithmic improvements can yield massive speedups.\nThe brute-force baseline is extremely slow, so good solutions should achieve high speedups.\n\nEvaluation Details\n------------------\n- Tested with 100,000 particles\n- 3 simulation iterations\n- Space size: 100.0, cullRadius: 25.0\n- Performance measured as median of 3 runs\n- Correctness verified with tolerance: position error < 1e-2\n- Fixed random seed for reproducibility\n", "config": "dependencies:\n  uv_project: resources\ntag: hpc\nruntime:\n  timeout_seconds: 600\n  environment: \"C++17 with OpenMP (GCC with libgomp1) on Ubuntu 22.04, 16 vCPUs\"\n  resources:\n    cloud: aws\n    instance_type: c7i.4xlarge\n    cpus: \"16\"\n    memory: \"32\"\n"}
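
The record above points at acceleration structures as the way to beat the O(N²) baseline. One common choice besides a quad-tree is a uniform grid with cell side `cullRadius`: any particle within `cullRadius` of a point must lie in the 3×3 block of cells around it. The Python sketch below only illustrates the binning idea (the actual submission must be C++, and the names are ours):

```python
import math
from collections import defaultdict

def grid_neighbor_indices(positions, cull_radius):
    """positions: list of (x, y) tuples. Yields (i, candidate_js) where
    candidate_js contains every j that could be within cull_radius of i."""
    # Bucket particles into cells of side cull_radius.
    cells = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        cells[(math.floor(x / cull_radius), math.floor(y / cull_radius))].append(i)
    # Each particle only scans the 9 cells surrounding its own cell.
    for i, (x, y) in enumerate(positions):
        cx, cy = math.floor(x / cull_radius), math.floor(y / cull_radius)
        yield i, [j
                  for dx in (-1, 0, 1)
                  for dy in (-1, 0, 1)
                  for j in cells[(cx + dx, cy + dy)]
                  if j != i]
```
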
+{"problem_id": "nbody_simulation/random_10k", "category": "research", "statement": "N-Body Simulation Problem - 10,000 Particles\n=============================================\n\nProblem Setting\n---------------\nDesign and optimize a high-performance parallel N-body simulation. In physics and astronomy, an N-body simulation models the dynamics of particles under gravitational forces. The available hardware is an AWS c7i.4xlarge.\n\nThe challenge involves optimizing:\n- **Loop parallelization**: Efficient parallel force computation across particles\n- **Acceleration structures**: Use structures such as a quad-tree for O(N log N) instead of O(N²), or other structures\n- **Load balancing**: Handling varying workloads per particle\n- **Parallel Programming Libraries**: Proper use of libraries like OpenMP\n\nThis variant tests performance on **10,000 particles** with 5 simulation iterations.\n\nTarget\n------\n- **Primary**: Ensure numerical correctness (tolerance: 1e-2)\n- **Secondary**: Maximize speedup over parallel brute-force baseline (higher is better)\n- **Tertiary**: Use algorithmic improvements (quad-tree, spatial hashing) to beat O(N²)\n\nSolution Format\n---------------\nSubmit a single C++ file (`.cpp`) that implements a `Simulator` class:\n\n```cpp\n#include \"world.h\"\n#include <omp.h>\n\nclass MySimulator : public Simulator {\nprivate:\n    // Persistent state across simulation steps\n    int numThreads = 8;\n    // Could store acceleration structures, pre-allocated buffers, etc.\n\npublic:\n    void init(int numParticles, StepParameters params) override {\n        // Called once before simulation starts\n        // Set thread count, pre-allocate structures, etc.\n        omp_set_num_threads(numThreads);\n    }\n\n    void simulateStep(std::vector<Particle> &particles,\n                      std::vector<Particle> &newParticles,\n                      StepParameters params) override {\n        // Called each simulation step\n        // For each particle i:\n        //   1. Compute total force from particles within params.cullRadius\n        //   2. Update particle using updateParticle()\n        //   3. Store result in newParticles[i]\n    }\n};\n\n// Factory function - must be implemented\nSimulator* createSimulator() {\n    return new MySimulator();\n}\n```\n\nProvided Types and Functions (in world.h)\n-----------------------------------------\n```cpp\nstruct Vec2 {\n    float x, y;\n    // Operators: +, -, *, length(), length2()\n};\n\nstruct Particle {\n    int id;\n    float mass;\n    Vec2 position;\n    Vec2 velocity;\n};\n\nstruct StepParameters {\n    float deltaTime = 0.2f;\n    float cullRadius = 1.0f;  // Only consider particles within this distance\n};\n\n// Simulator base class\nclass Simulator {\npublic:\n    virtual ~Simulator() = default;\n    virtual void init(int numParticles, StepParameters params) {}  // Optional\n    virtual void simulateStep(std::vector<Particle> &particles,\n                              std::vector<Particle> &newParticles,\n                              StepParameters params) = 0;  // Required\n};\n\n// Compute gravitational force between two particles\n// Returns Vec2(0,0) if distance > cullRadius or distance < 1e-3\ninline Vec2 computeForce(const Particle &target, const Particle &attractor,\n                         float cullRadius) {\n    auto dir = (attractor.position - target.position);\n    auto dist = dir.length();\n    if (dist < 1e-3f)\n        return Vec2(0.0f, 0.0f);\n    dir *= (1.0f / dist);\n    if (dist > cullRadius)\n        return Vec2(0.0f, 0.0f);\n    if (dist < 1e-1f)\n        dist = 1e-1f;\n    const float G = 0.01f;\n    Vec2 force = dir * target.mass * attractor.mass * (G / (dist * dist));\n    if (dist > cullRadius * 0.75f) {\n        float decay = 1.0f - (dist - cullRadius * 0.75f) / (cullRadius * 0.25f);\n        force *= decay;\n    }\n    return force;\n}\n\n// Apply force to particle and integrate position/velocity\ninline Particle updateParticle(const Particle &pi, Vec2 force,\n                               float deltaTime) {\n    Particle result = pi;\n    result.velocity += force * (deltaTime / pi.mass);\n    result.position += result.velocity * deltaTime;\n    return result;\n}\n```\n\nBaseline\n--------\nThe baseline is a simple OpenMP parallel brute-force O(N²) implementation:\n\n```cpp\n// Baseline for N-body simulation - simple OpenMP parallel brute-force\n// O(N²) approach with parallel outer loop\n// Solutions should aim to beat this baseline\n\n#include \"world.h\"\n#include <omp.h>\n\nclass BaselineSimulator : public Simulator {\nprivate:\n    int numThreads = 8;\n    \npublic:\n    void init(int numParticles, StepParameters params) override {\n        omp_set_num_threads(numThreads);\n    }\n    \n    void simulateStep(std::vector<Particle> &particles,\n                      std::vector<Particle> &newParticles,\n                      StepParameters params) override {\n        #pragma omp parallel for schedule(dynamic, 16)\n        for (int i = 0; i < (int)particles.size(); i++) {\n            auto pi = particles[i];\n            Vec2 force = Vec2(0.0f, 0.0f);\n            \n            for (size_t j = 0; j < particles.size(); j++) {\n                if (j == (size_t)i) continue;\n                if ((pi.position - particles[j].position).length() < params.cullRadius) {\n                    force += computeForce(pi, particles[j], params.cullRadius);\n                }\n            }\n            \n            newParticles[i] = updateParticle(pi, force, params.deltaTime);\n        }\n    }\n};\n\nSimulator* createSimulator() {\n    return new BaselineSimulator();\n}\n```\n\nTo beat the baseline, use algorithmic improvements like acceleration structures.\n\nPlease generate a `.cpp` file that follows the solution interface above, with the exact same signatures. The `Simulator` you write will be used in the following way:\n\n```cpp\ndouble runSimulation(World& world, Simulator* sim,\n                     StepParameters params, int numIterations) {\n    Timer timer;\n    timer.reset();\n    \n    // Initialize simulator at the start of each run (clean state)\n    sim->init(world.particles.size(), params);\n    \n    for (int iter = 0; iter < numIterations; iter++) {\n        world.newParticles.resize(world.particles.size());\n        sim->simulateStep(world.particles, world.newParticles, params);\n        world.particles.swap(world.newParticles);\n    }\n    \n    return timer.elapsed();\n}\n```\n\nCompilation\n-----------\nYour code is compiled with:\n```bash\ng++ -O2 -fopenmp -std=c++17 -I. -o benchmark solution.cpp\n```\n\nRequirements:\n- Can use OpenMP for parallelization\n- Must implement a `Simulator` subclass and `createSimulator()` factory function\n- May define additional helper classes/functions as needed\n- Do NOT modify `computeForce` or `updateParticle` functions\n\nCorrectness\n-----------\n\nWe will use the `BaselineSimulator` to get reference particle positions and compare your solution's output against them using the following code. We use a tolerance of `1e-2f`. If you fail the correctness check, you will get a score of zero.\n\n```cpp\nbool checkForCorrectness(const World& refW, const World& w, float tolerance = 1e-2f) {\n    if (w.particles.size() != refW.particles.size()) {\n        std::cerr << \"Mismatch: number of particles \" << w.particles.size()\n                  << \" does not match reference \" << refW.particles.size() << std::endl;\n        return false;\n    }\n\n    for (size_t i = 0; i < w.particles.size(); i++) {\n        auto errorX = std::abs(w.particles[i].position.x - refW.particles[i].position.x);\n        auto errorY = std::abs(w.particles[i].position.y - refW.particles[i].position.y);\n        if (errorX > tolerance || errorY > tolerance) {\n            std::cerr << \"Mismatch at index \" << i\n                      << \": result (\" << w.particles[i].position.x << \", \"\n                      << w.particles[i].position.y << \")\"\n                      << \" should be (\" << refW.particles[i].position.x << \", \"\n                      << refW.particles[i].position.y << \")\" << std::endl;\n            return false;\n        }\n    }\n    return true;\n}\n```\n\nScoring (0-100)\n---------------\nPerformance is measured by speedup over the parallel brute-force baseline:\n\n```\nspeedup = baseline_time / solution_time\nraw_score = min(speedup, 3.0)         # Cap at 3x speedup\nscore = (raw_score - 1.0) / 2.0 * 100  # Map 1x-3x to 0-100\n```\n\n- 0 points = No speedup (1x baseline performance)\n- 50 points = 2x speedup\n- 100 points = 3x+ speedup\n\nNote: Since the baseline is already parallelized, achieving speedup requires algorithmic improvements.\n\nEvaluation Details\n------------------\n- Tested with 10,000 particles\n- 5 simulation iterations\n- Space size: 100.0, cullRadius: 25.0\n- Performance measured as median of 3 runs\n- Correctness verified with tolerance: position error < 1e-2\n- Fixed random seed for reproducibility\n", "config": "dependencies:\n  uv_project: resources\ntag: hpc\nruntime:\n  timeout_seconds: 600\n  environment: \"C++17 with OpenMP (GCC with libgomp1) on Ubuntu 22.04, 16 vCPUs\"\n  resources:\n    cloud: aws\n    instance_type: c7i.4xlarge\n    cpus: \"16\"\n    memory: \"32\"\n"}
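
Both n-body variants share the same score mapping with different speedup caps (10x for the 100k-particle variant, 3x for 10k). Restating the scoring pseudo-code from the two records as one function (our naming):

```python
def nbody_score(baseline_time: float, solution_time: float, cap: float) -> float:
    """cap = 10.0 for random_100k, 3.0 for random_10k."""
    speedup = baseline_time / solution_time
    # Cap the speedup, then map [1x, cap] linearly onto [0, 100].
    return (min(speedup, cap) - 1.0) / (cap - 1.0) * 100.0
```
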
 {"problem_id": "poc_generation/heap_buffer_overflow", "category": "research", "statement": "", "config": "{\"tag\": \"security\"}\n"}
 {"problem_id": "poc_generation/heap_use_after_free", "category": "research", "statement": "", "config": "{\n  \"dependencies\": {\n    \"uv_project\": \"resources\"\n  },\n  \"datasets\": [\n    \"arvo:47101\"\n  ],\n  \"tag\": \"security\"\n}\n"}
 {"problem_id": "poc_generation/stack_buffer_overflow", "category": "research", "statement": "", "config": "{\"tag\": \"security\"}\n"}