html_url (string, 48–51 chars) | title (string, 5–280 chars) | comments (string, 63–51.8k chars) | body (string, 0–36.2k chars, nullable ⌀) | comment_length (int64, 16–1.52k) | text (string, 159–54.1k chars) | embeddings (sequence, length 768) |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/7033 | `from_generator` does not allow to specify the split name | Thanks for reporting, @pminervini.
I agree we should give the option to define the split name.
Indeed, there is a PR that addresses precisely this issue:
- #7015
I am reviewing it. | ### Describe the bug
I'm building train, dev, and test splits using `from_generator`; however, in all three cases, the logger prints `Generating train split:`.
It's not possible to change the split name since it seems to be hardcoded: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/generator/generator.py
### Steps to reproduce the bug
```
In [1]: from datasets import Dataset
In [2]: def gen():
...: yield {"pokemon": "bulbasaur", "type": "grass"}
...:
In [3]: ds = Dataset.from_generator(gen)
Generating train split: 1 examples [00:00, 133.89 examples/s]
```
### Expected behavior
It should be possible to specify any split name.
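For reference, a sketch of what this could look like, assuming the `split` argument proposed in #7015 (the call is hypothetical until that PR lands):
```
from datasets import Dataset

def gen():
    yield {"pokemon": "bulbasaur", "type": "grass"}

# Hypothetical call: `split` is the parameter proposed in #7015.
ds = Dataset.from_generator(gen, split="validation")
```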
### Environment info
- `datasets` version: 2.19.2
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.5
- `huggingface_hub` version: 0.23.3
- PyArrow version: 15.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2023.10.0 | 32 | … | [-0.08032244443893433, -0.18822568655014038, …] |
https://github.com/huggingface/datasets/issues/7005 | EmptyDatasetError: The directory at /metadata.jsonl doesn't contain any data files | If you are trying to load your image dataset from a local folder, you should replace `data_dir="path/to/jsonl/metadata.jsonl"` with the real folder path on your computer, for example:
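A minimal sketch of the corrected call (the folder name is a placeholder; ImageFolder expects `data_dir` to be the directory holding both the images and `metadata.jsonl`):
```
from datasets import load_dataset

# data_dir points at the folder, not at metadata.jsonl itself.
# Expected layout: my_folder/metadata.jsonl (one JSON object per line,
# with a "file_name" field) next to the image files it references.
dataset = load_dataset("imagefolder", data_dir="path/to/my_folder")
```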
https://huggingface.co/docs/datasets/en/image_load#imagefolder | ### Describe the bug
While trying to load a custom dataset from a JSONL file, I get the error: "metadata.jsonl doesn't contain any data files"
### Steps to reproduce the bug
This is my [metadata_v2.jsonl](https://github.com/user-attachments/files/16016011/metadata_v2.json) file. It sits in the same folder as all the images mentioned in that JSON(L) file.
With the command below, I am trying to call `load_dataset` so that I can upload the dataset as described on the [official website](https://huggingface.co/docs/datasets/en/image_dataset#upload-dataset-to-the-hub).
````
from datasets import load_dataset
dataset = load_dataset("imagefolder", data_dir="path/to/jsonl/metadata.jsonl")
````
error:
````
EmptyDatasetError Traceback (most recent call last)
Cell In[18], line 3
1 from datasets import load_dataset
----> 3 dataset = load_dataset("imagefolder",
4 data_dir="path/to/jsonl/file/metadata.jsonl")
5 dataset[0]["objects"]
File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:2594, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2589 verification_mode = VerificationMode(
2590 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
2591 )
2593 # Create a dataset builder
-> 2594 builder_instance = load_dataset_builder(
2595 path=path,
2596 name=name,
2597 data_dir=data_dir,
2598 data_files=data_files,
2599 cache_dir=cache_dir,
2600 features=features,
2601 download_config=download_config,
2602 download_mode=download_mode,
2603 revision=revision,
2604 token=token,
2605 storage_options=storage_options,
2606 trust_remote_code=trust_remote_code,
2607 _require_default_config_name=name is None,
2608 **config_kwargs,
2609 )
2611 # Return iterable dataset in case of streaming
2612 if streaming:
File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:2266, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)
2264 download_config = download_config.copy() if download_config else DownloadConfig()
2265 download_config.storage_options.update(storage_options)
-> 2266 dataset_module = dataset_module_factory(
2267 path,
2268 revision=revision,
2269 download_config=download_config,
2270 download_mode=download_mode,
2271 data_dir=data_dir,
2272 data_files=data_files,
2273 cache_dir=cache_dir,
2274 trust_remote_code=trust_remote_code,
2275 _require_default_config_name=_require_default_config_name,
2276 _require_custom_configs=bool(config_kwargs),
2277 )
2278 # Get dataset builder class from the processing script
2279 builder_kwargs = dataset_module.builder_kwargs
File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:1805, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)
1782 # We have several ways to get a dataset builder:
1783 #
1784 # - if path is the name of a packaged dataset module
(...)
1796
1797 # Try packaged
1798 if path in _PACKAGED_DATASETS_MODULES:
1799 return PackagedDatasetModuleFactory(
1800 path,
1801 data_dir=data_dir,
1802 data_files=data_files,
1803 download_config=download_config,
1804 download_mode=download_mode,
-> 1805 ).get_module()
1806 # Try locally
1807 elif path.endswith(filename):
File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:1140, in PackagedDatasetModuleFactory.get_module(self)
1135 def get_module(self) -> DatasetModule:
1136 base_path = Path(self.data_dir or "").expanduser().resolve().as_posix()
1137 patterns = (
1138 sanitize_patterns(self.data_files)
1139 if self.data_files is not None
-> 1140 else get_data_patterns(base_path, download_config=self.download_config)
1141 )
1142 data_files = DataFilesDict.from_patterns(
1143 patterns,
1144 download_config=self.download_config,
1145 base_path=base_path,
1146 )
1147 supports_metadata = self.name in _MODULE_SUPPORTS_METADATA
File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/data_files.py:503, in get_data_patterns(base_path, download_config)
501 return _get_data_files_patterns(resolver)
502 except FileNotFoundError:
--> 503 raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None
EmptyDatasetError: The directory at path/to/jsonl/file/metadata.jsonl doesn't contain any data files
````
### Expected behavior
It should be able to load the whole file as a dataset into the `dataset` variable. Instead, it gives the error "The directory at "path/to/jsonl/metadata.jsonl" doesn't contain any data files."
### Environment info
I am using a conda environment. | 26 | … | [-0.016234397888183594, 0.07239946722984314, …] |
https://github.com/huggingface/datasets/issues/7005 | EmptyDatasetError: The directory at /metadata.jsonl doesn't contain any data files | Ah yes, my bad: I was giving the file name when I should have given the folder path. That solved my issue. Thank you @albertvillanova and @lhoestq. | … | 28 | … | [-0.016234397888183594, 0.07239946722984314, …] |
https://github.com/huggingface/datasets/issues/7001 | Datasetbuilder Local Download FileNotFoundError | Ok, it seems the solution is to use the directory string without the trailing "/", which in my case is:
`parquet_dir = "~/data/Parquet"`
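For example, a defensive sketch (assuming the unexpanded `~` and the trailing slash are what trip up the builder):
```
import os
from datasets import load_dataset_builder

# Expand "~" and drop any trailing "/" before handing the path to the builder.
parquet_dir = os.path.expanduser("~/data/Parquet").rstrip("/")
builder = load_dataset_builder("rotten_tomatoes")
builder.download_and_prepare(parquet_dir, file_format="parquet")
```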
Still, I think this is weird behavior... | ### Describe the bug
So I was trying to download a dataset and save it as Parquet, following the [tutorial](https://huggingface.co/docs/datasets/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage) from Hugging Face. However, during execution I get a FileNotFoundError.
I debugged the code, and it seems there is a bug there:
first it creates a .incomplete folder, and before moving its contents, the following code deletes the directory
[Code](https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/builder.py#L984)
and as a result I get:
``` FileNotFoundError: [Errno 2] No such file or directory: '~/data/Parquet/.incomplete '```
### Steps to reproduce the bug
```
from datasets import load_dataset_builder
from pathlib import Path
parquet_dir = "~/data/Parquet/"
Path(parquet_dir).mkdir(parents=True, exist_ok=True)
builder = load_dataset_builder(
"rotten_tomatoes",
)
builder.download_and_prepare(parquet_dir, file_format="parquet")
```
### Expected behavior
Downloads the files and saves them as Parquet
### Environment info
Ubuntu,
Python 3.10
```
datasets 2.19.1
``` | 32 | … | [-0.09115815162658691, 0.007688999176025391, …] |
https://github.com/huggingface/datasets/issues/6995 | ImportError when importing datasets.load_dataset | What is the version of your installed `huggingface-hub`:
```python
import huggingface_hub
print(huggingface_hub.__version__)
```
It seems you have a very old version of `huggingface-hub`, where `CommitInfo` was not yet implemented. You need to update it:
```
pip install -U huggingface-hub
```
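After upgrading, a quick sanity check might look like this (a sketch; the `packaging` import is an assumption about your environment):
```python
import huggingface_hub
from packaging import version

# datasets pins huggingface-hub>=0.21.2 (see the note below).
assert version.parse(huggingface_hub.__version__) >= version.parse("0.21.2")
from huggingface_hub import CommitInfo  # should now import cleanly
```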
Note that `CommitInfo` was implemented in huggingface-hub 0.10.0 and datasets requires "huggingface-hub>=0.21.2" | ### Describe the bug
I encountered an ImportError while trying to import `load_dataset` from the `datasets` module in Hugging Face. The error message indicates a problem with importing 'CommitInfo' from 'huggingface_hub'.
### Steps to reproduce the bug
1. pip install git+https://github.com/huggingface/datasets
2. from datasets import load_dataset
### Expected behavior
ImportError Traceback (most recent call last)
Cell In[7], [line 1](vscode-notebook-cell:?execution_count=7&line=1)
----> [1](vscode-notebook-cell:?execution_count=7&line=1) from datasets import load_dataset
[3](vscode-notebook-cell:?execution_count=7&line=3) train_set = load_dataset("mispeech/speechocean762", split="train")
[4](vscode-notebook-cell:?execution_count=7&line=4) test_set = load_dataset("mispeech/speechocean762", split="test")
File d:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\__init__.py:[1](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:1)7
1 # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
[2](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:2) #
[3](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:3) # Licensed under the Apache License, Version 2.0 (the "License");
(...)
[12](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:12) # See the License for the specific language governing permissions and
[13](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:13) # limitations under the License.
[15](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:15) __version__ = "2.20.1.dev0"
---> [17](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:17) from .arrow_dataset import Dataset
[18](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:18) from .arrow_reader import ReadInstruction
[19](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:19) from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
File d:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\arrow_dataset.py:63
[61](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:61) import pyarrow.compute as pc
[62](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:62) from fsspec.core import url_to_fs
---> [63](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:63) from huggingface_hub import (
[64](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:64) CommitInfo,
[65](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:65) CommitOperationAdd,
...
[70](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:70) )
[71](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:71) from huggingface_hub.hf_api import RepoFile
[72](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:72) from multiprocess import Pool
ImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (d:\Anaconda3\envs\CS224S\Lib\site-packages\huggingface_hub\__init__.py)
### Environment info
Leo@DESKTOP-9NHUAMI MSYS /d/Anaconda3/envs/CS224S/Lib/site-packages/huggingface_hub
$ datasets-cli env
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "D:\Anaconda3\envs\CS224S\Scripts\datasets-cli.exe\__main__.py", line 4, in <module>
File "D:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\__init__.py", line 17, in <module>
from .arrow_dataset import Dataset
File "D:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\arrow_dataset.py", line 63, in <module>
from huggingface_hub import (
ImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (D:\Anaconda3\envs\CS224S\Lib\site-packages\huggingface_hub\__init__.py)
(CS224S) | 52 | … | [-0.27894434332847595, -0.16736024618148804, …] |
https://github.com/huggingface/datasets/issues/6995 | ImportError when importing datasets.load_dataset | The error message says there is no CommitInfo in your installed huggingface-hub library:
```
ImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (D:\Anaconda3\envs\CS224S\Lib\site-packages\huggingface_hub\__init__.py)
```
And this is implemented since version 0.10.0:
- https://github.com/huggingface/huggingface_hub/pull/1066 | … | 32 | … | [-0.27894434332847595, -0.16736024618148804, …] |
https://github.com/huggingface/datasets/issues/6992 | Dataset with streaming doesn't work with proxy | Hi! Can you try updating `datasets` and `huggingface_hub`?
```
pip install -U datasets huggingface_hub
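# If pip itself must go through the proxy, it can be pointed at it
# explicitly (assumption: HTTPS_PROXY is already exported in your shell):
# pip install -U --proxy "$HTTPS_PROXY" datasets huggingface_hub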
``` | ### Describe the bug
I'm currently trying to stream data using `datasets` since the dataset is too big, but it hangs indefinitely without loading the first batch. I use AIMOS, which is a supercomputer that uses a proxy to connect to the internet, so I assume it has to do with the network configuration. I've already set up both HTTP_PROXY and HTTPS_PROXY. `streaming=False` works fine.
### Steps to reproduce the bug
Use `load_dataset` with `streaming=True` on AIMOS.
### Expected behavior
It should not hang indefinitely, and should load batches to start the training run. A workaround sketch is shown below.
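A workaround sketch (assumptions: streaming fetches files through fsspec's aiohttp-based HTTP filesystem, and aiohttp only honors `HTTP_PROXY`/`HTTPS_PROXY` when `trust_env` is set; the dataset name is a placeholder):
```
from datasets import load_dataset

ds = load_dataset(
    "c4", "en",  # placeholder dataset
    split="train",
    streaming=True,
    # Forward client_kwargs to aiohttp so it reads the proxy env vars.
    storage_options={"client_kwargs": {"trust_env": True}},
)
print(next(iter(ds)))  # should fetch the first example through the proxy
```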
### Environment info
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
_pytorch_select 2.0 cuda_2 https://ftp.osuosl.org/pub/open-ce/1.10.0
abseil-cpp 20220623.0 h9888cd1_6 conda-forge
absl-py 1.0.0 py311h399429b_0 https://ftp.osuosl.org/pub/open-ce/1.10.0
aiofiles 23.2.1 pyhd8ed1ab_0 conda-forge
aiohttp 3.8.6 py311hf118e41_0
aiosignal 1.2.0 pyhd3eb1b0_0
archspec 0.2.3 pyhd8ed1ab_0 conda-forge
arrow-cpp 11.0.0 ha3edaa6_5_cpu conda-forge
async-timeout 4.0.2 py311h6ffa863_0
attrs 23.1.0 py311h6ffa863_0
av 10.0.0 py311he6153ed_2 https://ftp.osuosl.org/pub/open-ce/1.10.0
aws-c-auth 0.6.24 hb81f6d7_5 conda-forge
aws-c-cal 0.5.20 h3c2b4d9_6 conda-forge
aws-c-common 0.8.11 h4194056_0 conda-forge
aws-c-compression 0.2.16 ha19333d_3 conda-forge
aws-c-event-stream 0.2.18 h12a9399_6 conda-forge
aws-c-http 0.7.4 ha2cde00_2 conda-forge
aws-c-io 0.13.17 h9189062_2 conda-forge
aws-c-mqtt 0.8.6 h40d1a04_6 conda-forge
aws-c-s3 0.2.4 hbdbe4f0_3 conda-forge
aws-c-sdkutils 0.1.7 ha19333d_3 conda-forge
aws-checksums 0.1.14 ha19333d_3 conda-forge
aws-crt-cpp 0.19.7 hd018011_7 conda-forge
aws-sdk-cpp 1.10.57 hb9575ba_4 conda-forge
blas 1.0 openblas
blinker 1.8.2 pyhd8ed1ab_0 conda-forge
boltons 23.0.0 py311h6ffa863_0
boost-cpp 1.82.0 h25e6d66_2
bottleneck 1.3.5 py311h34f6284_0
brotli 1.0.9 hf118e41_7
brotli-bin 1.0.9 hf118e41_7
brotli-python 1.0.9 py311h4a02239_7
bzip2 1.0.8 h7b6447c_0
c-ares 1.19.1 hf118e41_0
ca-certificates 2024.6.2 h0f6029e_0 conda-forge
cachetools 5.3.3 pyhd8ed1ab_0 conda-forge
certifi 2024.6.2 pyhd8ed1ab_0 conda-forge
cffi 1.15.1 py311hf118e41_3
charset-normalizer 2.0.4 pyhd3eb1b0_0
click 8.1.7 unix_pyh707e725_0 conda-forge
conda 24.5.0 py311h1af927a_0 conda-forge
conda-content-trust 0.2.0 py311h6ffa863_0
conda-libmamba-solver 23.11.1 py311h6ffa863_0
conda-package-handling 2.2.0 py311h6ffa863_0
conda-package-streaming 0.9.0 py311h6ffa863_0
contourpy 1.0.5 py311h25e6d66_0
cryptography 41.0.3 py311hb0e80e7_0
cudatoolkit 11.8.0 hedcfb66_13 conda-forge
cudnn 8.9.2_11.8 h9ceb136_1 https://ftp.osuosl.org/pub/open-ce/1.10.0
cycler 0.11.0 pyhd3eb1b0_0
datasets 2.12.0 py311h6ffa863_0
dill 0.3.6 py311h6ffa863_0
distro 1.9.0 pyhd8ed1ab_0 conda-forge
ffmpeg 4.2.2 opence_0 https://ftp.osuosl.org/pub/open-ce/1.10.0
filelock 3.9.0 py311h6ffa863_0
fmt 9.1.0 h25e6d66_0
fonttools 4.25.0 pyhd3eb1b0_0
freetype 2.12.1 hd23a775_0
frozendict 2.4.4 py311hb02d432_0 conda-forge
frozenlist 1.4.0 py311hf118e41_0
fsspec 2023.9.2 py311h6ffa863_0
gflags 2.2.2 he6710b0_0
giflib 5.2.1 hf118e41_3
glog 0.6.0 hbe088e0_0 conda-forge
gmp 6.3.0 h46f38da_0 conda-forge
gmpy2 2.1.5 py311h2758da7_1 conda-forge
google-auth 2.30.0 pyhff2d567_0 conda-forge
google-auth-oauthlib 0.5.3 pyhd8ed1ab_0 conda-forge
grpc-cpp 1.51.1 h8ba971d_1 conda-forge
grpcio 1.54.3 py311h414e0d3_0 https://ftp.osuosl.org/pub/open-ce/1.10.0
huggingface_hub 0.17.3 py311h6ffa863_0
icu 73.1 h4a02239_0
idna 3.4 py311h6ffa863_0
importlib-metadata 6.0.0 py311h6ffa863_0
jinja2 3.1.4 pyhd8ed1ab_0 conda-forge
jpeg 9e hf118e41_1
jsonpatch 1.32 pyhd3eb1b0_0
jsonpointer 2.1 pyhd3eb1b0_0
kiwisolver 1.4.4 py311h4a02239_0
krb5 1.20.1 hc019ccd_1
lame 3.100 hb283c62_1003 conda-forge
lcms2 2.12 h2045e0b_0
ld_impl_linux-ppc64le 2.38 hec883e6_1
lerc 3.0 h29c3540_0
leveldb 1.23 h24532b4_1 conda-forge
libabseil 20220623.0 cxx17_h9235812_6 conda-forge
libarchive 3.6.2 hd8ab008_2
libarrow 11.0.0 h837770b_5_cpu conda-forge
libboost 1.82.0 haf51a6a_2
libbrotlicommon 1.0.9 hf118e41_7
libbrotlidec 1.0.9 hf118e41_7
libbrotlienc 1.0.9 hf118e41_7
libcrc32c 1.1.2 h3b9df90_0 conda-forge
libcurl 8.4.0 h4d62439_0
libdeflate 1.17 hf118e41_1
libedit 3.1.20221030 hf118e41_0
libev 4.33 h140841e_1
libevent 2.1.10 h19c23f1_4 conda-forge
libexpat 2.6.2 h46f38da_0 conda-forge
libffi 3.4.4 h4a02239_0
libgcc-ng 13.2.0 h31e42bb_10 conda-forge
libgfortran-ng 11.2.0 hb3889a9_1
libgfortran5 11.2.0 h1234567_1
libgomp 13.2.0 h31e42bb_10 conda-forge
libgoogle-cloud 2.7.0 h11140b6_1 conda-forge
libgrpc 1.51.1 h4d29a31_1 conda-forge
libmamba 1.5.3 h7c6fafd_0
libmambapy 1.5.3 py311h828bf7b_0
libnghttp2 1.57.0 h44e5816_0
libnsl 2.0.1 ha17a0cc_0 conda-forge
libopenblas 0.3.23 hc5a31fb_2 https://ftp.osuosl.org/pub/open-ce/1.10.0
libopus 1.3.1 h4e0d66e_1 conda-forge
libpng 1.6.39 hf118e41_0
libprotobuf 3.21.12 h1776448_0 https://ftp.osuosl.org/pub/open-ce/1.10.0
libsolv 0.7.24 h0f529ac_0
libsqlite 3.45.3 hd4bbf49_0 conda-forge
libssh2 1.10.0 h50fa78f_2
libstdcxx-ng 13.2.0 h262982c_10 conda-forge
libthrift 0.18.0 h82f1162_0 conda-forge
libtiff 4.5.1 h4a02239_0
libutf8proc 2.8.0 hb283c62_0 conda-forge
libuuid 2.38.1 h4194056_0 conda-forge
libvpx 1.13.1 h46f38da_0 conda-forge
libwebp 1.3.2 h0f96ee2_0
libwebp-base 1.3.2 hf118e41_0
libxcrypt 4.4.36 ha17a0cc_1 conda-forge
libxml2 2.10.4 h18e3229_1
libzlib 1.2.13 h1f2b957_6 conda-forge
llvm-openmp 14.0.6 hc028133_0 https://ftp.osuosl.org/pub/open-ce/1.10.0
lmdb 0.9.31 ha17a0cc_1 conda-forge
lz4-c 1.9.4 h4a02239_0
markdown 3.4.4 pyhd8ed1ab_0 conda-forge
markupsafe 2.1.5 py311h32d8acf_0 conda-forge
matplotlib 3.8.0 py311h6ffa863_0
matplotlib-base 3.8.0 py311h52e1fcc_0
menuinst 2.1.1 py311h1af927a_0 conda-forge
mpc 1.3.1 heaf1863_0 conda-forge
mpfr 4.2.1 haad2271_1 conda-forge
mpmath 1.3.0 pyhd8ed1ab_0 conda-forge
multidict 6.0.2 py311hf118e41_0
multiprocess 0.70.14 py311h6ffa863_0
munkres 1.1.4 py_0
mypy_extensions 1.0.0 pyha770c72_0 conda-forge
nccl 2.18.3 cuda11.8_1 https://ftp.osuosl.org/pub/open-ce/1.10.0
ncurses 6.4 h4a02239_0
nest-asyncio 1.6.0 pyhd8ed1ab_0 conda-forge
networkx 2.8.8 pyhd8ed1ab_0 conda-forge
nomkl 3.0 0 https://ftp.osuosl.org/pub/open-ce/1.10.0
numactl 2.0.16 hba61f60_1 https://ftp.osuosl.org/pub/open-ce/1.10.0
numexpr 2.8.7 py311hc46fc55_0
numpy 1.24.3 py311h148a09e_0
numpy-base 1.24.3 py311h06b82f6_0
oauthlib 3.2.2 pyhd8ed1ab_0 conda-forge
openjpeg 2.4.0 hfe35807_0
openssl 3.3.1 h1f2b957_0 conda-forge
orc 1.8.2 h341c9a4_2 conda-forge
packaging 23.1 py311h6ffa863_0
pandas 2.1.1 py311h52e1fcc_0
pcre2 10.42 h280155c_0
pillow 10.0.1 py311he33076b_0
pip 23.3 py311h6ffa863_0
platformdirs 4.2.2 pyhd8ed1ab_0 conda-forge
pluggy 1.0.0 py311h6ffa863_1
pooch 1.8.2 pyhd8ed1ab_0 conda-forge
protobuf 4.21.12 py311ha7baec7_1 https://ftp.osuosl.org/pub/open-ce/1.10.0
psutil 5.9.8 py311hd26027c_0 conda-forge
pyarrow 11.0.0 py311h04a18d5_1
pyasn1 0.6.0 pyhd8ed1ab_0 conda-forge
pyasn1-modules 0.4.0 pyhd8ed1ab_0 conda-forge
pybind11-abi 4 hd3eb1b0_1
pycosat 0.6.6 py311hf118e41_0
pycparser 2.21 pyhd3eb1b0_0
pyjwt 2.8.0 pyhd8ed1ab_1 conda-forge
pyopenssl 23.2.0 py311h6ffa863_0
pyparsing 3.0.9 py311h6ffa863_0
pyre-extensions 0.0.30 pyhd8ed1ab_0 conda-forge
pysocks 1.7.1 py311h6ffa863_0
python 3.11.8 h3332dee_0_cpython conda-forge
python-dateutil 2.8.2 pyhd3eb1b0_0
python-tzdata 2023.3 pyhd3eb1b0_0
python-xxhash 2.0.2 py311hf118e41_1
python_abi 3.11 4_cp311 conda-forge
pytorch 2.0.1 cuda11.8_py311_1 https://ftp.osuosl.org/pub/open-ce/1.10.0
pytorch-base 2.0.1 cuda11.8_py311_pb4.21.12_4 https://ftp.osuosl.org/pub/open-ce/1.10.0
pytz 2023.3.post1 py311h6ffa863_0
pyu2f 0.1.5 pyhd8ed1ab_0 conda-forge
pyyaml 6.0.1 py311hf118e41_0
re2 2023.02.01 h883269e_0 conda-forge
readline 8.2 hf118e41_0
regex 2023.10.3 py311hf118e41_0
reproc 14.2.4 h29c3540_1
reproc-cpp 14.2.4 h29c3540_1
requests 2.31.0 py311h6ffa863_0
requests-oauthlib 2.0.0 pyhd8ed1ab_0 conda-forge
responses 0.13.3 pyhd3eb1b0_0
rsa 4.9 pyhd8ed1ab_0 conda-forge
ruamel.yaml 0.17.21 py311hf118e41_0
s2n 1.3.37 h5e47323_0 conda-forge
safetensors 0.4.0 py311hda16d9e_0
scipy 1.11.1 py311hd69e9bb_0 https://ftp.osuosl.org/pub/open-ce/1.10.0
sentencepiece 0.1.97 h1e74c73_py311_pb4.21.12_2 https://ftp.osuosl.org/pub/open-ce/1.10.0
setuptools 68.0.0 py311h6ffa863_0
six 1.16.0 pyhd3eb1b0_1
snappy 1.1.9 h29c3540_0
sqlite 3.41.2 hf118e41_0
sympy 1.12.1 pypyh2585a3b_103 conda-forge
tabulate 0.8.10 pyhd8ed1ab_0 conda-forge
tensorboard 2.13.0 pyhab0730d_pb4.21.12_1 https://ftp.osuosl.org/pub/open-ce/1.10.0
tensorboard-data-server 0.7.0 pyh6f84499_1 https://ftp.osuosl.org/pub/open-ce/1.10.0
tensorboard-plugin-wit 1.6.0 pyh9f0ad1d_0 conda-forge
tk 8.6.13 hd4bbf49_0 conda-forge
tokenizers 0.13.3 py311h3d4f45a_0
torchdata 0.6.0 py311_2 https://ftp.osuosl.org/pub/open-ce/1.10.0
torchsnapshot 0.1.0 pyhd8ed1ab_0 conda-forge
torchtext-base 0.15.2 cuda11.8_py311_1 https://ftp.osuosl.org/pub/open-ce/1.10.0
torchtnt 0.2.4 pyhd8ed1ab_0 conda-forge
torchvision-base 0.15.2 cuda11.8_py311_1 https://ftp.osuosl.org/pub/open-ce/1.10.0
tornado 6.3.3 py311hf118e41_0
tqdm 4.65.0 py311h7837921_0
transformers 4.32.1 py311h6ffa863_0
truststore 0.8.0 py311h6ffa863_0
typing-extensions 4.7.1 py311h6ffa863_0
typing_extensions 4.7.1 py311h6ffa863_0
typing_inspect 0.9.0 pyhd8ed1ab_0 conda-forge
tzdata 2023c h04d1e81_0
urllib3 1.26.18 py311h6ffa863_0
utf8proc 2.6.1 h140841e_0
werkzeug 2.3.8 pyhd8ed1ab_0 conda-forge
wheel 0.41.2 py311h6ffa863_0
xxhash 0.8.0 h140841e_3
xz 5.4.2 hf118e41_0
yaml 0.2.5 h7b6447c_0
yaml-cpp 0.8.0 h4a02239_0
yarl 1.8.1 py311hf118e41_0
zipp 3.11.0 py311h6ffa863_0
zlib 1.2.13 h1f2b957_6 conda-forge
zstandard 0.19.0 py311hf118e41_0
zstd 1.5.5 h57e4825_0 | 17 | Dataset with streaming doesn't work with proxy
### Describe the bug
I'm currently trying to stream a dataset with `datasets` because it is too big to download, but the load hangs indefinitely without yielding the first batch. I'm running on AIMOS, a supercomputer that connects to the internet through a proxy, so I assume the problem lies in the network configuration. I've already set both `HTTP_PROXY` and `HTTPS_PROXY`. Loading with `streaming=False` works fine.
### Steps to reproduce the bug
Call `load_dataset` with `streaming=True` on AIMOS (i.e., behind the proxy), as sketched below.
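A minimal sketch of the failing pattern; the dataset name and proxy address below are placeholders, not the ones actually used on the cluster:

```python
import os

# Proxy configuration as typically set on the cluster (placeholder address)
os.environ["HTTP_PROXY"] = "http://proxy.example.edu:8080"
os.environ["HTTPS_PROXY"] = "http://proxy.example.edu:8080"

from datasets import load_dataset

# Any dataset large enough to require streaming
ds = load_dataset("c4", "en", split="train", streaming=True)
batch = next(iter(ds))  # hangs here indefinitely; streaming=False works
```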
### Expected behavior
The call should not hang: it should start yielding batches so the training run can begin.
### Environment info
_libgcc_mutex 0.1 conda_forge conda-forge (...TRUNCATED)
Hi! Can you try updating `datasets` and `huggingface_hub`?
```
pip install -U datasets huggingface_hub
```
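If updating alone doesn't fix the hang, one avenue worth trying: streaming over plain HTTP(S) URLs goes through `fsspec`/`aiohttp`, and `aiohttp` only honors the `HTTP_PROXY`/`HTTPS_PROXY` environment variables when `trust_env` is enabled. The sketch below assumes `datasets` forwards `storage_options` to fsspec's HTTP filesystem; that routing is an assumption, not a confirmed fix:

```python
from datasets import load_dataset

# Assumption: storage_options is forwarded to fsspec's HTTPFileSystem, whose
# client_kwargs are passed to aiohttp.ClientSession; trust_env=True makes
# aiohttp read HTTP_PROXY/HTTPS_PROXY from the environment.
ds = load_dataset(
    "c4", "en",
    split="train",
    streaming=True,
    storage_options={"client_kwargs": {"trust_env": True}},
)
print(next(iter(ds)))
```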
| [-0.7595154047012329, -0.10635515302419662, -0.07152166217565536, 0.05183333158493042, 0.46253055334091187, -0.08935867249965668, 0.3620522618293762, -0.005497589707374573, 0.16050976514816284, -0.06045207381248474, 0.04542510211467743, -0.0038897281046956778, 0.20361945033073425, 0.1351694... |
https://github.com/huggingface/datasets/issues/6985 | AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType' | "Please note that the error is raised just at import:\r\n```python\r\nimport pyarrow.parquet as pq\r(...TRUNCATED) | "### Describe the bug\n\nI have been struggling with this for two days, any help would be appreciate(...TRUNCATED) | 121 | "AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType' \n ### Describe the bug\n\nI h(...TRUNCATED) | [-0.18593543767929077,-0.4933111071586609,-0.03907205909490585,0.42739152908325195,0.484800398349761(...TRUNCATED) |
https://github.com/huggingface/datasets/issues/6985 | AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType' | "It is not a problem with the `datasets` library: we support latest version of `pyarrow` and our Con(...TRUNCATED) | "### Describe the bug\n\nI have been struggling with this for two days, any help would be appreciate(...TRUNCATED) | 211 | "AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType' \n ### Describe the bug\n\nI h(...TRUNCATED) | [-0.18593543767929077,-0.4933111071586609,-0.03907205909490585,0.42739152908325195,0.484800398349761(...TRUNCATED) |
https://github.com/huggingface/datasets/issues/6984 | Convert polars DataFrame back to datasets | "Hi ! Thanks for reporting :)\r\n\r\nWe don't support `large_list` yet, though it should be added to(...TRUNCATED) | "### Feature request\n\nThis returns error.\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndsdf(...TRUNCATED) | 25 | "Convert polars DataFrame back to datasets \n ### Feature request\n\nThis returns error.\r\n```pytho(...TRUNCATED) | [-0.29300931096076965,-0.4047183096408844,-0.11217483878135681,0.10133212804794312,0.460617959499359(...TRUNCATED) |