KMasaki/PowerCLIP-ViT-B-16-CC12M
Zero-Shot Image Classification
Pre-processed CC12M dataset for training PowerCLIP.
Each sample contains the original image and caption plus two precomputed annotations:
- Parse-tree phrases (`.njson`): NP/PP/VP/S constituent phrases extracted via spaCy, with token indices aligned to OpenCLIP's SimpleTokenizer (CSR format).
- SAM regions (`.samlens.npy` + `.samcat.npy`): Segment Anything Model (SAM ViT-H) region bounding boxes converted to ViT patch-grid token indices (CSR format, patch size 16, image size 224).

Format: WebDataset tar archives (2176 shards). Each sample contains:
{key}.jpg # Image
{key}.txt # Caption
{key}.json # Metadata (original CC12M fields)
{key}.njson # Parse-tree phrase indices (CSR: lengths + token IDs)
{key}.samlens.npy # SAM region lengths array
{key}.samcat.npy # SAM region token indices (concatenated)
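The SAM annotation above maps each region's pixel bounding box onto the 14×14 ViT patch grid (224 / 16 patches per side). A minimal sketch of that conversion, assuming boxes come as `(x0, y0, x1, y1)` pixel coordinates with an exclusive bottom-right corner (the exact box convention used to build this dataset is an assumption, and `box_to_patch_tokens` is a hypothetical helper):

```python
def box_to_patch_tokens(x0: int, y0: int, x1: int, y1: int,
                        image_size: int = 224, patch: int = 16) -> list[int]:
    """Map a pixel bounding box to row-major ViT patch-grid token indices.

    Assumes (x1, y1) is exclusive. The grid is image_size // patch
    patches per side (14 for 224/16), so token index = row * 14 + col.
    """
    grid = image_size // patch
    col0, row0 = x0 // patch, y0 // patch
    col1, row1 = (x1 - 1) // patch, (y1 - 1) // patch
    return [r * grid + c
            for r in range(row0, row1 + 1)
            for c in range(col0, col1 + 1)]
```

Token lists produced this way, one per region, are what the CSR pair (`samlens.npy` lengths plus `samcat.npy` concatenated indices) would store.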
import webdataset as wds

dataset = wds.WebDataset("cc12m-train-{0000..2175}.tar")
for sample in dataset:
    image = sample["jpg"]                    # raw JPEG bytes
    caption = sample["txt"].decode("utf-8")  # caption (stored as bytes)
    # SAM regions (.samlens.npy/.samcat.npy) and parse-tree phrases (.njson)
    # are loaded automatically by PowerCLIP's data pipeline
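Outside of PowerCLIP's pipeline, the `.npy` fields of a raw WebDataset sample arrive as undecoded bytes, and the CSR pair can be unpacked with plain NumPy. A sketch under that assumption (`load_npy_bytes` and `split_csr` are hypothetical helpers, demonstrated on a toy CSR pair rather than real dataset values):

```python
import io
import numpy as np

def load_npy_bytes(raw: bytes) -> np.ndarray:
    """Decode raw .npy bytes, e.g. sample["samlens.npy"] from a WebDataset sample."""
    return np.load(io.BytesIO(raw))

def split_csr(lengths: np.ndarray, values: np.ndarray) -> list[np.ndarray]:
    """Split concatenated token indices (samcat) into per-region arrays using samlens."""
    offsets = np.cumsum(lengths[:-1])  # split points after each region
    return np.split(values, offsets)

# Toy round trip: serialize a lengths array the way .npy bytes are stored, then decode.
buf = io.BytesIO()
np.save(buf, np.array([2, 1, 3]))
lengths = load_npy_bytes(buf.getvalue())
regions = split_csr(lengths, np.array([5, 6, 10, 20, 21, 22]))
```

The same two helpers apply to the `.njson` phrase indices, which use the identical lengths-plus-concatenated-values layout.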
Or use with PowerCLIP directly:
torchrun --nproc_per_node 8 -m training.main \
--train-data "cc12m-train-{0000..2175}.tar" \
...
Phrase extraction uses the spaCy `en_core_web_sm` model.