---
license: other
license_name: custom-apple-license
license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE
viewer: false
task_categories:
- text-to-image
- image-to-text
language:
- en
library_name: tic-clip
---

# Dataset Card for TiC-DataComp

<!-- Provide a quick summary of the dataset. -->

This dataset contains metadata for the TiC-DataComp benchmark for time-continual learning of image-text models.
It provides timestamp information for DataComp-1B in the form of UID groupings by year/month, sourced from the original CommonCrawl.
We also release UIDs for our TiC-DataCompNet and TiC-DataComp-Retrieval evaluations for continual learning of CLIP models.
For details on how to use the metadata, please visit our [GitHub repository](https://github.com/apple/ml-tic-clip).

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

Keeping large foundation models up to date on the latest data is inherently expensive.
To avoid the prohibitive costs of constantly retraining, it is imperative to continually train these models.
This problem is exacerbated by the lack of any large-scale continual learning benchmarks or baselines.
We introduce the first set of web-scale Time-Continual (TiC) benchmarks for training vision-language models:
TiC-DataComp, TiC-YFCC, and TiC-RedCaps. TiC-DataComp, our largest dataset,
contains over 12.7B timestamped image-text pairs spanning 9 years (2014-2022).
We first use our benchmarks to curate various dynamic evaluations to measure the temporal robustness of existing models.
We show that OpenAI's CLIP (trained on data up to 2020) loses ≈8% zero-shot accuracy on our curated retrieval task from 2021-2022 compared with more recently trained models in the OpenCLIP repository.
We then study how to efficiently train models on time-continuous data.
We demonstrate that a simple rehearsal-based approach that continues training from the last checkpoint and replays old data reduces compute by 2.5× compared with the standard practice of retraining from scratch.
Code is available at [github.com/apple/ml-tic-clip](https://github.com/apple/ml-tic-clip).
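
To make the rehearsal idea concrete, here is a minimal, hypothetical Python sketch (not the paper's exact recipe; the replay ratio, data layout, and training step are placeholders) of continuing from the last checkpoint while replaying old data:

```python
# Hypothetical sketch of rehearsal-based continual training (placeholder
# recipe, not the paper's exact method): at each time step, resume from the
# previous checkpoint's weights and train on a mix of new and replayed data.
import random

def continual_train(model, data_by_year, train_one_step, replay_ratio=0.5):
    """Train `model` sequentially over yearly splits, replaying old batches."""
    replay_buffer = []
    for year in sorted(data_by_year):
        new_batches = data_by_year[year]
        # Sample old batches to replay alongside the new ones (ratio is a placeholder).
        n_replay = min(int(len(new_batches) * replay_ratio), len(replay_buffer))
        mixed = new_batches + random.sample(replay_buffer, n_replay)
        random.shuffle(mixed)
        for batch in mixed:
            train_one_step(model, batch)  # weights carry over from the last checkpoint
        replay_buffer.extend(new_batches)
    return model
```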

- **Developed by:** Apple
- **License:** See [LICENSE](https://github.com/apple/ml-tic-clip/blob/main/LICENSE)

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

Researchers can use the TiC-DataComp dataset to design and evaluate large-scale continual learning methods for image-text models.

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

```
- tic-datacomp_training_monthly/<YYYYMM>.npy
    - List of UIDs for each month.
- tic-datacomp_training_yearly_noeval/<YYYY>.npy
    - List of UIDs for each year after removing yearly evaluation sets.
- tic-datacomp_retrieval_evals_year2uids: TiC-DataComp-Retrieval evaluation UIDs per year.
- tic-datacompnet_year2uids: TiC-DataCompNet evaluation UIDs per year.
```
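
As a minimal usage sketch (assuming the `.npy` files hold DataComp UID strings; the parquet path and `uid` column below are illustrative, not part of this release), the monthly UID lists can be loaded with NumPy and used to filter DataComp metadata:

```python
# Minimal sketch: load one month's UIDs and filter a DataComp metadata shard.
# The .npy files are assumed to hold UID strings; paths here are illustrative.
import numpy as np
import pandas as pd

# UIDs of image-text pairs from March 2016 (YYYYMM = 201603).
uids = np.load("tic-datacomp_training_monthly/201603.npy", allow_pickle=True)
uid_set = set(uids.tolist())

# Keep only the rows of a (hypothetical) local metadata shard from this month.
metadata = pd.read_parquet("datacomp_metadata_shard.parquet")
month_subset = metadata[metadata["uid"].isin(uid_set)]
print(f"{len(month_subset)} of {len(metadata)} samples are from 2016-03")
```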

## Citation

**[TiC-CLIP: Continual Training of CLIP Models](https://arxiv.org/abs/2310.16226). (ICLR 2024)**
*Garg, S., Farajtabar, M., Pouransari, H., Vemulapalli, R., Mehta, S., Tuzel, O., Shankar, V., and Faghri, F.*

```bibtex
@inproceedings{garg2024tic,
  title={TiC-CLIP: Continual Training of CLIP Models},
  author={Garg, Saurabh and Farajtabar, Mehrdad and Pouransari, Hadi and Vemulapalli, Raviteja and Mehta, Sachin and Tuzel, Oncel and Shankar, Vaishaal and Faghri, Fartash},
  booktitle={The Twelfth International Conference on Learning Representations (ICLR)},
  year={2024},
  url={https://openreview.net/forum?id=TLADT8Wrhn}
}
```