Update README.md
README.md CHANGED

@@ -47,6 +47,12 @@ dataset_info:
## ScienceAgentBench

+### Update 04/30/2026: To mitigate false negatives in evaluation, we have released a verified version of ScienceAgentBench. Please load our benchmark using the following code going forward:
+```py
+from datasets import load_dataset
+ds = load_dataset("osunlp/ScienceAgentBench", split="verified")
+```
+
The advancements of large language models (LLMs) have piqued growing interest in developing LLM-based language agents to automate scientific discovery end-to-end, which has sparked both excitement and skepticism about their true capabilities.
In this work, we call for rigorous assessment of agents on individual tasks in a scientific workflow before making bold claims on end-to-end automation.
To this end, we present ScienceAgentBench, a new benchmark for evaluating language agents for data-driven scientific discovery:
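As a quick sanity check after switching to the verified split, here is a minimal sketch that inspects what was loaded. It uses only the standard `datasets` `Dataset` API (`len`, `column_names`, indexing); no field names beyond what the dataset itself defines are assumed:

```py
from datasets import load_dataset

# Load the verified split, as recommended in the update note above.
ds = load_dataset("osunlp/ScienceAgentBench", split="verified")

# Inspect the result: number of tasks, available fields, and one example.
print(len(ds))          # how many verified tasks were loaded
print(ds.column_names)  # per-task fields defined by the dataset
print(ds[0])            # first task as a plain Python dict
```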