---
base_model:
- Qwen/Qwen2.5-3B-Instruct
datasets:
- ulab-ai/Time-Bench
license: apache-2.0
tags:
- temporal-reasoning
- reinforcement-learning
- large-language-models
paperswithcode:
  arxiv_id: 2505.13508
library_name: transformers
pipeline_tag: text-generation
---

<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65d188a4aa309d842e438ef1/d6YiWBndm7WzANfl3e1qi.png" alt="Output Examples" width="600">
</center>

<div align="center">
<a href="https://huggingface.co/datasets/ulab-ai/Time-Bench"><strong>Dataset</strong></a> | <a href="https://github.com/ulab-uiuc/Time-R1"><strong>Code</strong></a> | <a href="https://arxiv.org/abs/2505.13508"><strong>Paper</strong></a>
</div>

# Time-R1 Model Series

This collection hosts the official checkpoints for the **Time-R1** model, described in the paper "Time-R1: Towards Comprehensive Temporal Reasoning in LLMs". Time-R1 is a 3B-parameter large language model trained with a novel three-stage reinforcement-learning curriculum that endows it with comprehensive temporal abilities: understanding, prediction, and creative generation.

These models are trained using the [Time-Bench dataset](https://huggingface.co/datasets/ulab-ai/Time-Bench).
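
To inspect the training data directly, the dataset can be loaded with the standard `datasets` library (a minimal sketch; see the dataset card for the actual configs and splits):

```python
from datasets import load_dataset

# Load Time-Bench from the Hugging Face Hub; if the dataset defines
# multiple configs, pass the desired config name as the second argument.
time_bench = load_dataset("ulab-ai/Time-Bench")
print(time_bench)  # inspect the available splits and features
```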

## Model Checkpoints

We provide several checkpoints representing different stages of the Time-R1 training process:

### Stage 1: Temporal Comprehension Models

These models are trained to develop foundational temporal understanding.

* **[Time-R1-S1P1](https://huggingface.co/ulab-ai/Time-R1-S1P1):** Checkpoint after Phase 1 of Stage 1 training.
    * *Focus: Foundational logic on easy timestamp-inference tasks.*
* **[Time-R1-S1P2](https://huggingface.co/ulab-ai/Time-R1-S1P2):** Checkpoint after Phase 2 of Stage 1 training.
    * *Focus: Full task exploration on all Stage 1 subtasks with mixed difficulty.*
* **[Time-R1-Theta1](https://huggingface.co/ulab-ai/Time-R1-Theta1):** Checkpoint θ₁, after Phase 3 (full Stage 1 training).
    * *Focus: Refined precision on all Stage 1 subtasks under stricter evaluation.*
* **[Time-R1-Theta1_prime](https://huggingface.co/ulab-ai/Time-R1-Theta1_prime):** Ablation model θ₁′, trained for Stage 1 without the dynamic reward design.
    * *Focus: Serves as a baseline to evaluate the efficacy of the dynamic reward curriculum.*

### Stage 2: Future Event Time Prediction Model

This model builds upon Stage 1 capabilities to predict future event timings.

* **[Time-R1-Theta2](https://huggingface.co/ulab-ai/Time-R1-Theta2):** Checkpoint θ₂, after Stage 2 training.
    * *Focus: Predicting the timing of future events occurring after its initial knowledge cutoff.*

Please refer to the [main paper](https://arxiv.org/abs/2505.13508) for a detailed discussion of the architecture, training methodology, and comprehensive evaluations.

## How to Use

For loading and using these models, please refer to the example scripts and documentation provided in our [GitHub repository](https://github.com/ulab-uiuc/Time-R1).

Typically, you can load the models using the Hugging Face `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Pick any of the checkpoints listed above.
model_name = "ulab-ai/Time-R1-S1P2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# See the GitHub repository for full usage examples; a minimal generation
# sketch also follows below.
```
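
Because these checkpoints are fine-tuned from Qwen/Qwen2.5-3B-Instruct, it is reasonable to assume they retain the standard chat template. Under that assumption, a minimal generation sketch might look like this (the question is purely illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ulab-ai/Time-R1-S1P2"  # any checkpoint listed above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # place the 3B model on GPU if one is available
)

# Build a chat-formatted prompt (an illustrative temporal question).
messages = [
    {"role": "user", "content": "In which year and month was the James Webb Space Telescope launched?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```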

## Citations

```bibtex
@article{liu2025time,
  title={Time-R1: Towards Comprehensive Temporal Reasoning in LLMs},
  author={Liu, Zijia and Han, Peixuan and Yu, Haofei and Li, Haoru and You, Jiaxuan},
  journal={arXiv preprint arXiv:2505.13508},
  year={2025}
}
```