Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed
Error code: DatasetGenerationError
Exception: UnicodeDecodeError
Message: 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte
Traceback:

```
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1872, in _prepare_split_single
    for key, table in generator:
  File "/usr/local/lib/python3.12/site-packages/datasets/packaged_modules/text/text.py", line 98, in _generate_tables
    batch = f.read(self.config.chunksize)
  File "/usr/local/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 844, in read_with_retries
    out = read(*args, **kwargs)
  File "<frozen codecs>", line 322, in decode
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1347, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 980, in convert_to_parquet
    builder.download_and_prepare(
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 884, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 947, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1739, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1922, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
The preview row (column `text`, type string) contains a Dockerfile:

```dockerfile
FROM pytorch/pytorch:2.5.1-cuda11.8-cudnn9-devel

ENV DEBIAN_FRONTEND=noninteractive \
    PYTHONUNBUFFERED=1 \
    CUDA_HOME=/usr/local/cuda \
    PATH="$CUDA_HOME/bin:$PATH"

RUN apt-get update && apt-get install -y --no-install-recommends \
    git \
    curl \
    ffmpeg \
    libsm6 \
    libxext6 \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

WORKDIR /workspace/

COPY requirements.txt /workspace/requirements.txt

RUN pip install --upgrade pip \
    && pip install ninja \
    && MAX_JOBS=1 pip install flash-attn --no-build-isolation \
    && pip install -r requirements.txt \
    && pip install opencv-fixer==0.2.5 \
    && python -c "from opencv_fixer import AutoFix; AutoFix()"

CMD ["/bin/bash"]
```
Another preview row holds an annotated module printout of the Infinity model; the `# n.` comments trace tensor shapes through a forward pass:

```
Infinity(
  drop_path_rate=0.1
  (norm0_cond): Identity()
  (text_norm): FastRMSNorm(C=2048, eps=1e-06, elementwise_affine=True)  # 1. (text_feature, null_text_feature): (6, 2048) -> (6, 2048)
  (text_proj_for_sos): TextAttentivePool(
    (ca): CrossAttention(
      Cq=2048, Ckv=2048, cos_attn=False
      (mat_kv): Linear(in_features=2048, out_features=4096, bias=False)  # 2. kv_compact (6, 2048) -> (6, 4096) -> (N, 2, num_heads, head_dim) = (6, 2, 16, 128)
      # 3. q_compact (1, 16, 128) -> (2, 16, 128)
      # 4. q_compact (1, 16, 128), kv_compact (6, 2, 16, 128) -> oup (2, 16, 128) -> oup (2, 1, 2048)
      (proj): Linear(in_features=2048, out_features=2048, bias=True)  # 5. oup (2, 1, 2048) -> (2, 1, 2048)
      (proj_drop): Identity()  # 6. (2, 1, 2048) -> (2, 1, 2048) -> (2, 2048): sos = cond_BD
    )
  )
  (text_proj_for_ca): Sequential(  # 7. kv_compact (6, 2048) -> (6, 2048)
    (0): Linear(in_features=2048, out_features=2048, bias=True)
    (1): GELU(approximate='tanh')
    (2): Linear(in_features=2048, out_features=2048, bias=True)
  )
  # 8. last_stage = sos + pos_start: (2, 1, 2048)
  (lvl_embed): Embedding(15, 2048)  # 10. last_stage += (2, 1, 2048)
  (norm0_ve): Identity()
  (word_embed): Linear(in_features=32, out_features=2048, bias=True)
  (shared_ada_lin): Sequential(  # 9. cond_BD (2, 2048) -> cond_BD_or_gss (2, 1, 6, 2048)
    (0): SiLU()
    (1): SharedAdaLin(in_features=2048, out_features=12288, bias=True)
  )
  (head_nm): AdaLNBeforeHead(
    (ln_wo_grad): LayerNorm((2048,), eps=1e-06, elementwise_affine=False)
    (ada_lin): Sequential(
      (0): SiLU()
      (1): Linear(in_features=2048, out_features=4096, bias=True)
    )
  )
  (head): Linear(in_features=2048, out_features=64, bias=True)
  (block_chunks): ModuleList(
    (0): MultipleLayers(
      (module): ModuleList(
        (0): CrossAttnBlock(
          # 11. gamma1, gamma2, scale1, scale2, shift1, shift2, each (2, 1, 2048) = (self.ada_gss + cond_BD).unbind(2)
          shared_aln=True, fused_norm=True, ca_gamma=1
          (drop_path): Identity()
          (sa): SelfAttention(
            using_flash=False, tau=1, cos_attn=True
            (mat_qkv): Linear(in_features=2048, out_features=6144, bias=False)  # 12. last_stage (2, 1, 2048) -> qkv (2, 1, 6144) -> qkv (2, 1, 3, 16, 128)
            # 13. qkv (2, 1, 3, 16, 128) -> qkv (3, 2, 16, 1, 128) -> q (2, 16, 1, 128), k (2, 16, 1, 128), v (2, 16, 1, 128)
            # 14. scaled_dot_product_attention(q, k, v) -> oup (2, 1, 2048)
            (proj): Linear(in_features=2048, out_features=2048, bias=True)  # 15. oup (2, 1, 2048) -> oup (2, 1, 2048)
            (proj_drop): Identity()  # 16. oup (2, 1, 2048) -> x (2, 1, 2048)
          )
          (ca): CrossAttention(
            Cq=2048, Ckv=2048, cos_attn=False
            (mat_q): Linear(in_features=2048, out_features=2048, bias=True)  # 19. q (2, 1, 2048) -> q_compact (2, 16, 128)
            (mat_kv): Linear(in_features=2048, out_features=4096, bias=False)  # 18. kv_compact (6, 2048) -> kv_compact (6, 2, 16, 128)
            (proj): Linear(in_features=2048, out_features=2048, bias=True)
            (proj_drop): Identity()
          )
          (ffn): FFN(
            fused_mlp=False
            (fc1): Linear(in_features=2048, out_features=8192, bias=True)
            (act): GELU(approximate='tanh')
            (fc2): Linear(in_features=8192, out_features=2048, bias=True)
            (drop): Identity()
          )
          (ln_wo_grad): LayerNorm((2048,), eps=1e-06, elementwise_affine=False)
          (ca_norm): LayerNorm((2048,), eps=1e-06, elementwise_affine=True)  # 17. x (2, 1, 2048) -> x (2, 1, 2048)
        )
        (1-3): 3 x CrossAttnBlock(
          shared_aln=True, fused_norm=True, ca_gamma=1
          (drop_path): DropPath(...)
          (sa): SelfAttention(
            using_flash=False, tau=1, cos_attn=True
            (mat_qkv): Linear(in_features=2048, out_features=6144, bias=False)
```
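The numbered annotations trace shapes for a batch of 2 (presumably a conditional/unconditional classifier-free-guidance pair) with 6 text tokens. The shape bookkeeping of steps 9, 11 and 12-14 (shared AdaLN modulation and the self-attention qkv split) can be sketched in NumPy; `W_ada` and `W_qkv` are random stand-ins for the learned weights, so only the shapes, not the values, mirror the printout:

```python
import numpy as np

B, L, C, heads, head_dim = 2, 1, 2048, 16, 128   # sizes taken from the printout
rng = np.random.default_rng(0)

# Step 9: shared_ada_lin maps cond_BD (2, 2048) through SiLU and a linear layer
# to (2, 12288), viewed as cond_BD_or_gss (2, 1, 6, 2048).
cond_BD = rng.standard_normal((B, C), dtype=np.float32)
W_ada = rng.standard_normal((C, 6 * C), dtype=np.float32)  # placeholder weight
silu = cond_BD / (1.0 + np.exp(-cond_BD))                  # SiLU(x) = x * sigmoid(x)
cond_BD_or_gss = (silu @ W_ada).reshape(B, 1, 6, C)

# Step 11: unbind along axis 2 into six modulation tensors, each (2, 1, 2048)
gamma1, gamma2, scale1, scale2, shift1, shift2 = np.moveaxis(cond_BD_or_gss, 2, 0)

# Steps 12-13: mat_qkv maps last_stage (2, 1, 2048) -> (2, 1, 6144), reshaped to
# (2, 1, 3, 16, 128) and permuted to (3, 2, 16, 1, 128); unpack into q, k, v.
last_stage = rng.standard_normal((B, L, C), dtype=np.float32)
W_qkv = rng.standard_normal((C, 3 * C), dtype=np.float32)  # placeholder weight
qkv = (last_stage @ W_qkv).reshape(B, L, 3, heads, head_dim)
q, k, v = np.transpose(qkv, (2, 0, 3, 1, 4))               # each (2, 16, 1, 128)

# Step 14: plain scaled dot-product attention, heads merged back to (2, 1, 2048)
scores = np.einsum("bhld,bhmd->bhlm", q, k) / np.sqrt(head_dim)
attn = np.exp(scores - scores.max(-1, keepdims=True))
attn /= attn.sum(-1, keepdims=True)                        # softmax over keys
oup = np.einsum("bhlm,bhmd->bhld", attn, v)                # (2, 16, 1, 128)
oup = np.transpose(oup, (0, 2, 1, 3)).reshape(B, L, C)     # (2, 1, 2048)
```

Note that `torch.unbind(dim=2)` corresponds to unpacking `np.moveaxis(..., 2, 0)` here, and the printed model uses fused kernels (`scaled_dot_product_attention`) rather than this explicit softmax.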
End of preview.