Schema of the dump below (one record per issue/PR; min and max values shown where the viewer reports them):

column           type                 min                  max
id               int64                599M                 3.48B
number           int64                1                    7.8k
title            string (length)      1                    290
state            string (2 classes)
comments         list (length)        0                    30
created_at       timestamp[s]         2020-04-14 10:18:02  2025-10-05 06:37:50
updated_at       timestamp[s]         2020-04-27 16:04:17  2025-10-05 10:32:43
closed_at        timestamp[s]         2020-04-14 12:01:40  2025-10-01 13:56:03
body             string (length)      0                    228k
user             string (length)      3                    26
html_url         string (length)      46                   51
pull_request     dict
is_pull_request  bool (2 classes)
#6655 · Cannot load the dataset go_emotions (issue, open)
user: arame · id: 2,127,020,042 · created: 2024-02-09T12:15:39 · updated: 2024-02-12T09:35:55 · closed: null
https://github.com/huggingface/datasets/issues/6655
body: ### Describe the bug When I run the following code I get an exception; `go_emotions = load_dataset("go_emotions")` > AttributeError Traceback (most recent call last) Cell In[6], [line 1](vscode-notebook-cell:?execution_count=6&line=1) ----> [1](vscode-notebook-cell:?execution_count=6&l...
comments: [ "Thanks for reporting, @arame.\r\n\r\nI guess you have an old version of `transformers` (that submodule is present in `transformers` since version 3.0.1, since nearly 4 years ago). If you update it, the error should disappear:\r\n```shell\r\npip install -U transformers\r\n```\r\n\r\nOn the other hand, I am wonderin...
#6654 · Batched dataset map throws exception that cannot cast fixed length array to Sequence (issue, closed)
user: keesjandevries · id: 2,126,939,358 · created: 2024-02-09T11:23:19 · updated: 2024-02-12T08:26:53 · closed: 2024-02-12T08:26:53
https://github.com/huggingface/datasets/issues/6654
body: ### Describe the bug I encountered a TypeError when batch processing a dataset with Sequence features in datasets package version 2.16.1. The error arises from a mismatch in handling fixed-size list arrays during the map function execution. Debugging pinpoints the issue to an if-statement in datasets/table.py, line 20...
comments: [ "Hi ! This issue has been fixed by https://github.com/huggingface/datasets/pull/6283\r\n\r\nCan you try again with the new release 2.17.0 ?\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\n", "Amazing! It's indeed fixed now. Thanks!" ]
#6653 · Set dev version (PR, closed)
user: albertvillanova · id: 2,126,831,929 · created: 2024-02-09T10:12:02 · updated: 2024-02-09T10:18:20 · closed: 2024-02-09T10:12:12
https://github.com/huggingface/datasets/pull/6653
body: null
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6653). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6653", "html_url": "https://github.com/huggingface/datasets/pull/6653", "diff_url": "https://github.com/huggingface/datasets/pull/6653.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6653.patch", "merged_at": "2024-02-09T10:12...
#6652 · Release: 2.17.0 (PR, closed)
user: albertvillanova · id: 2,126,760,798 · created: 2024-02-09T09:25:01 · updated: 2024-02-09T10:11:48 · closed: 2024-02-09T10:05:35
https://github.com/huggingface/datasets/pull/6652
body: null
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6652). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6652", "html_url": "https://github.com/huggingface/datasets/pull/6652", "diff_url": "https://github.com/huggingface/datasets/pull/6652.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6652.patch", "merged_at": "2024-02-09T10:05...
#6651 · Slice splits support for datasets.load_from_disk (issue, open)
user: mhorlacher · id: 2,126,649,626 · created: 2024-02-09T08:00:21 · updated: 2024-06-14T14:42:46 · closed: null
https://github.com/huggingface/datasets/issues/6651
body: ### Feature request Support for slice splits in `datasets.load_from_disk`, similar to how it's already supported for `datasets.load_dataset`. ### Motivation Slice splits are convenient in a number of cases - adding support to `datasets.load_from_disk` would make working with local datasets easier and homogeniz...
comments: []
#6650 · AttributeError: 'InMemoryTable' object has no attribute '_batches' (issue, open)
user: matsuobasho · id: 2,125,680,991 · created: 2024-02-08T17:11:26 · updated: 2024-02-21T00:34:41 · closed: null
https://github.com/huggingface/datasets/issues/6650
body: ### Describe the bug ``` Traceback (most recent call last): File "finetune.py", line 103, in <module> main(args) File "finetune.py", line 45, in main data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer, File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict....
comments: [ "Hi! Does running the following code also return the same error on your machine? \r\n\r\n```python\r\nimport copy\r\nimport pyarrow as pa\r\nfrom datasets.table import InMemoryTable\r\n\r\ncopy.deepcopy(InMemoryTable(pa.table({\"a\": [1, 2, 3], \"b\": [\"foo\", \"bar\", \"foobar\"]})))\r\n```", "No, it doesn't, ...
#6649 · Minor multi gpu doc improvement (PR, closed)
user: lhoestq · id: 2,124,940,213 · created: 2024-02-08T11:17:24 · updated: 2024-02-08T11:23:35 · closed: 2024-02-08T11:17:35
https://github.com/huggingface/datasets/pull/6649
body: just added torch.no_grad and eval()
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6649). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6649", "html_url": "https://github.com/huggingface/datasets/pull/6649", "diff_url": "https://github.com/huggingface/datasets/pull/6649.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6649.patch", "merged_at": "2024-02-08T11:17...
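The doc tweak in #6649 is the standard inference pattern: put the model in `eval()` mode and wrap the forward pass in `torch.no_grad()`. A minimal sketch with a toy model (assuming PyTorch is installed; the model here is made up):

```python
import torch
from torch import nn

# toy stand-in for the real model from the docs example
model = nn.Linear(4, 2)

model.eval()  # disable dropout / batch-norm training behavior
with torch.no_grad():  # skip autograd bookkeeping during inference
    out = model(torch.zeros(1, 4))

print(out.requires_grad)  # False: no gradient graph was built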
#6648 · Document usage of hfh cli instead of git (PR, closed)
user: lhoestq · id: 2,124,813,589 · created: 2024-02-08T10:24:56 · updated: 2024-02-08T13:57:41 · closed: 2024-02-08T13:51:39
https://github.com/huggingface/datasets/pull/6648
body: (basically the same content as the hfh upload docs, but adapted for datasets)
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6648). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6648", "html_url": "https://github.com/huggingface/datasets/pull/6648", "diff_url": "https://github.com/huggingface/datasets/pull/6648.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6648.patch", "merged_at": "2024-02-08T13:51...
#6647 · Update loading.mdx to include "jsonl" file loading. (PR, open)
user: mosheber · id: 2,123,397,569 · created: 2024-02-07T16:18:08 · updated: 2024-02-08T15:34:17 · closed: null
https://github.com/huggingface/datasets/pull/6647
body: * A small update to the documentation, noting the ability to load jsonl files.
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6647). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "> Thanks for adding the explicit loading command.\r\n> \r\n> However, I would move it j...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6647", "html_url": "https://github.com/huggingface/datasets/pull/6647", "diff_url": "https://github.com/huggingface/datasets/pull/6647.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6647.patch", "merged_at": null }
#6646 · Better multi-gpu example (PR, closed)
user: lhoestq · id: 2,123,134,128 · created: 2024-02-07T14:15:01 · updated: 2024-02-09T17:43:32 · closed: 2024-02-07T14:59:11
https://github.com/huggingface/datasets/pull/6646
body: Use Qwen1.5-0.5B-Chat as an easy example for multi-GPU; the previous example used a translation model, and the way it was set up was not really the right way to use the model.
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6646). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6646", "html_url": "https://github.com/huggingface/datasets/pull/6646", "diff_url": "https://github.com/huggingface/datasets/pull/6646.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6646.patch", "merged_at": "2024-02-07T14:59...
#6645 · Support fsspec 2024.2 (issue, closed)
user: albertvillanova · id: 2,122,956,818 · created: 2024-02-07T12:45:29 · updated: 2024-02-29T15:12:19 · closed: 2024-02-29T15:12:19
https://github.com/huggingface/datasets/issues/6645
body: Support fsspec 2024.2. First, we should address: - #6644
comments: [ "I'd be very grateful. This upper bound banished me straight into dependency hell today. :(" ]
#6644 · Support fsspec 2023.12 (issue, closed)
user: albertvillanova · id: 2,122,955,282 · created: 2024-02-07T12:44:39 · updated: 2024-02-29T15:12:18 · closed: 2024-02-29T15:12:18
https://github.com/huggingface/datasets/issues/6644
body: Support fsspec 2023.12 by handling previous and new glob behavior.
comments: [ "The pinned fsspec version range dependency conflict has been affecting several of our users in https://github.com/iterative/dvc. I've opened an initial PR that I think should resolve the glob behavior changes with using datasets + the latest fsspec release.\r\n\r\nPlease let us know if there's any other fsspec rel...
#6643 · Faiss GPU index cannot be serialised when passed to trainer (issue, open)
user: rubenweitzman · id: 2,121,239,039 · created: 2024-02-06T16:41:00 · updated: 2024-02-15T10:29:32 · closed: null
https://github.com/huggingface/datasets/issues/6643
body: ### Describe the bug I am working on a retrieval project and have encountered two issues in the Hugging Face faiss integration: 1. I am trying to pass in a dataset with a faiss index to the Huggingface trainer. The code works for a cpu faiss index, but doesn't for a gpu one, getting error: ``` ...
comments: [ "Hi ! make sure your query embeddings are numpy arrays, not torch tensors ;)", "Hi Quentin, not sure how that solves the problem number 1. I am trying to pass on a dataset with a faiss gpu for training to the standard trainer but getting this serialisation error. What is a workaround this? I do not want to remove...
#6642 · Differently dataset object saved than it is loaded. (issue, closed)
user: MFajcik · id: 2,119,085,766 · created: 2024-02-05T17:28:57 · updated: 2024-02-06T09:50:19 · closed: 2024-02-06T09:50:19
https://github.com/huggingface/datasets/issues/6642
body: ### Describe the bug Differently sized object is saved than it is loaded. ### Steps to reproduce the bug Hi, I save dataset in a following way: ``` dataset = load_dataset("json", data_files={ "train": os.path.join(input_folder, f"{task_met...
comments: [ "I see now, that I have to use `load_from_disk`, in order to load dataset properly, not `load_dataset`. Why is this behavior split? Why do we need both, `load_dataset` and `load_from_disk`?\r\n\r\nUnless answered, I believe this might be helpful for other hf datasets newbies.\r\n\r\nAnyway, made a `load_dataset` co...
#6641 · unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte (issue, closed)
user: Hughhuh · id: 2,116,963,132 · created: 2024-02-04T08:49:31 · updated: 2024-02-06T09:26:07 · closed: 2024-02-06T09:11:45
https://github.com/huggingface/datasets/issues/6641
body: ### Describe the bug unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte ### Steps to reproduce the bug ``` import sys sys.getdefaultencoding() 'utf-8' from datasets import load_dataset print(f"Train dataset size: {len(dataset['train'])}") print(f"Test datase...
comments: [ "Hi @Hughhuh. \r\n\r\nI have formatted the issue because it was not easily readable. Additionally, the environment info is incomplete: it seems you did not run the proposed CLI command `datasets-cli env` and essential information is missing: version of `datasets`, version of `pyarrow`,...\r\n\r\nWith the informatio...
#6640 · Sign Language Support (issue, open)
user: Merterm · id: 2,115,864,531 · created: 2024-02-02T21:54:51 · updated: 2024-02-02T21:54:51 · closed: null
https://github.com/huggingface/datasets/issues/6640
body: ### Feature request Currently, there are only several Sign Language labels, I would like to propose adding all the Signed Languages as new labels which are described in this ISO standard: https://www.evertype.com/standards/iso639/sign-language.html ### Motivation Datasets currently only have labels for several signe...
comments: []
#6639 · Run download_and_prepare if missing splits (PR, open)
user: lhoestq · id: 2,114,620,200 · created: 2024-02-02T10:36:49 · updated: 2024-02-06T16:54:22 · closed: null
https://github.com/huggingface/datasets/pull/6639
body: A first step towards https://github.com/huggingface/datasets/issues/6529
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6639). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6639", "html_url": "https://github.com/huggingface/datasets/pull/6639", "diff_url": "https://github.com/huggingface/datasets/pull/6639.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6639.patch", "merged_at": null }
#6638 · Cannot download wmt16 dataset (issue, closed)
user: vidyasiv · id: 2,113,329,257 · created: 2024-02-01T19:41:42 · updated: 2024-02-01T20:07:29 · closed: 2024-02-01T20:07:29
https://github.com/huggingface/datasets/issues/6638
body: ### Describe the bug As of this morning (PST) 2/1/2024, seeing the wmt16 dataset is missing from opus , could you suggest an alternative? ``` Downloading data files: 0%| | 0/4 [00:00<?, ?it/s]Tra...
comments: [ "Looks like it works with latest datasets repository\r\n```\r\n- `datasets` version: 2.16.2.dev0\r\n- Platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- `huggingface_hub` version: 0.20.3\r\n- PyArrow version: 15.0.0\r\n- Pandas version: 2.0.1\r\n- `fsspec` version: 2023.10.0\r\...
#6637 · 'with_format' is extremely slow when used together with 'interleave_datasets' or 'shuffle' on IterableDatasets (issue, closed)
user: tobycrisford · id: 2,113,025,975 · created: 2024-02-01T17:16:54 · updated: 2025-09-18T16:37:11 · closed: 2025-09-18T16:37:11
https://github.com/huggingface/datasets/issues/6637
body: ### Describe the bug If you: 1. Interleave two iterable datasets together with the interleave_datasets function, or shuffle an iterable dataset 2. Set the output format to torch tensors with .with_format('torch') Then iterating through the dataset becomes over 100x slower than it is if you don't apply the torch...
comments: [ "The \"torch\" formatting is usually fast because we do zero-copy conversion from the Arrow data on your disk to Torch tensors. However IterableDataset shuffling seems to do data copies that slow down the pipeline, and it shuffles python objects instead of Arrow data.\r\n\r\nTo fix this we need to implement `Buffer...
#6636 · Faster column validation and reordering (PR, closed)
user: psmyth94 · id: 2,110,781,097 · created: 2024-01-31T19:08:28 · updated: 2024-02-07T19:39:00 · closed: 2024-02-06T23:03:38
https://github.com/huggingface/datasets/pull/6636
body: I work with bioinformatics data and often these tables have thousands and even tens of thousands of features. These tables are also accompanied by metadata that I do not want to pass in the model. When I perform `set_format('pt', columns=large_column_list)` , it can take several minutes before it finishes. The culprit ...
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6636). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Thanks @mariosasko, I made the changes. However, I did some tests with `map` and I stil...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6636", "html_url": "https://github.com/huggingface/datasets/pull/6636", "diff_url": "https://github.com/huggingface/datasets/pull/6636.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6636.patch", "merged_at": "2024-02-06T23:03...
#6635 · Fix missing info when loading some datasets from Parquet export (PR, closed)
user: lhoestq · id: 2,110,659,519 · created: 2024-01-31T17:55:21 · updated: 2024-02-07T16:48:55 · closed: 2024-02-07T16:41:04
https://github.com/huggingface/datasets/pull/6635
body: Fix getting the info for script-based datasets with Parquet export with a single config not named "default". E.g. ```python from datasets import load_dataset_builder b = load_dataset_builder("bookcorpus") print(b.info.features) # should print {'text': Value(dtype='string', id=None)} ``` I fixed this by ...
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6635). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6635", "html_url": "https://github.com/huggingface/datasets/pull/6635", "diff_url": "https://github.com/huggingface/datasets/pull/6635.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6635.patch", "merged_at": "2024-02-07T16:41...
#6634 · Support data_dir parameter in push_to_hub (PR, closed)
user: albertvillanova · id: 2,110,242,376 · created: 2024-01-31T14:37:36 · updated: 2024-02-05T10:32:49 · closed: 2024-02-05T10:26:40
https://github.com/huggingface/datasets/pull/6634
body: Support `data_dir` parameter in `push_to_hub`. This allows users to organize the data files according to their specific needs. For example, "wikimedia/wikipedia" files could be organized by year and/or date, e.g. "2024/20240101/20240101.en".
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6634). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@huggingface/datasets, feel free to review this PR so that it can be included in the ne...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6634", "html_url": "https://github.com/huggingface/datasets/pull/6634", "diff_url": "https://github.com/huggingface/datasets/pull/6634.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6634.patch", "merged_at": "2024-02-05T10:26...
#6633 · dataset viewer requires no-script (PR, closed)
user: severo · id: 2,110,124,475 · created: 2024-01-31T13:41:54 · updated: 2024-01-31T14:05:04 · closed: 2024-01-31T13:59:01
https://github.com/huggingface/datasets/pull/6633
body: null
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6633). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6633", "html_url": "https://github.com/huggingface/datasets/pull/6633", "diff_url": "https://github.com/huggingface/datasets/pull/6633.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6633.patch", "merged_at": "2024-01-31T13:59...
#6632 · Fix reload cache with data dir (PR, closed)
user: lhoestq · id: 2,108,541,678 · created: 2024-01-30T18:52:23 · updated: 2024-02-06T17:27:35 · closed: 2024-02-06T17:21:24
https://github.com/huggingface/datasets/pull/6632
body: The cache used to only check for the latest cache directory with a given config_name, but it was wrong (e.g. `default-data_dir=data%2Ffortran-data_dir=data%2Ffortran` instead of `default-data_dir=data%2Ffortran`) I fixed this by not passing the `config_kwargs` to the parent Builder `__init__`, and passing the config...
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6632). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6632", "html_url": "https://github.com/huggingface/datasets/pull/6632", "diff_url": "https://github.com/huggingface/datasets/pull/6632.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6632.patch", "merged_at": "2024-02-06T17:21...
#6631 · Fix filelock: use current umask for filelock >= 3.10 (PR, closed)
user: lhoestq · id: 2,107,802,473 · created: 2024-01-30T12:56:01 · updated: 2024-01-30T15:34:49 · closed: 2024-01-30T15:28:37
https://github.com/huggingface/datasets/pull/6631
body: reported in https://github.com/huggingface/evaluate/issues/542 cc @stas00 @williamberrios close https://github.com/huggingface/datasets/issues/6589
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6631). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6631", "html_url": "https://github.com/huggingface/datasets/pull/6631", "diff_url": "https://github.com/huggingface/datasets/pull/6631.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6631.patch", "merged_at": "2024-01-30T15:28...
#6630 · Bump max range of dill to 0.3.8 (PR, closed)
user: ringohoffman · id: 2,106,478,275 · created: 2024-01-29T21:35:55 · updated: 2024-01-30T16:19:45 · closed: 2024-01-30T15:12:25
https://github.com/huggingface/datasets/pull/6630
body: Release on Jan 27, 2024: https://pypi.org/project/dill/0.3.8/#history
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6630). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Hmm these errors look pretty weird... can they be retried?", "Hi, thanks for working ...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6630", "html_url": "https://github.com/huggingface/datasets/pull/6630", "diff_url": "https://github.com/huggingface/datasets/pull/6630.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6630.patch", "merged_at": "2024-01-30T15:12...
#6629 · Support push_to_hub without org/user to default to logged-in user (PR, closed)
user: albertvillanova · id: 2,105,774,482 · created: 2024-01-29T15:36:52 · updated: 2024-02-05T12:35:43 · closed: 2024-02-05T12:29:36
https://github.com/huggingface/datasets/pull/6629
body: This behavior is aligned with: - the behavior of `datasets` before merging #6519 - the behavior described in the corresponding docstring - the behavior of `huggingface_hub.create_repo` Revert "Support push_to_hub canonical datasets (#6519)" - This reverts commit a887ee78835573f5d80f9e414e8443b4caff3541. Fix...
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6629). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@huggingface/datasets, feel free to review this PR so that it can be included in the ne...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6629", "html_url": "https://github.com/huggingface/datasets/pull/6629", "diff_url": "https://github.com/huggingface/datasets/pull/6629.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6629.patch", "merged_at": "2024-02-05T12:29...
#6628 · Make CLI test support multi-processing (PR, closed)
user: albertvillanova · id: 2,105,760,502 · created: 2024-01-29T15:30:09 · updated: 2024-02-05T10:29:20 · closed: 2024-02-05T10:23:13
https://github.com/huggingface/datasets/pull/6628
body: Support passing `--num_proc` to CLI test. This was really useful recently to run the command on `pubmed`: https://huggingface.co/datasets/pubmed/discussions/11
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6628). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@huggingface/datasets, feel free to review this PR so that it can be included in the ne...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6628", "html_url": "https://github.com/huggingface/datasets/pull/6628", "diff_url": "https://github.com/huggingface/datasets/pull/6628.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6628.patch", "merged_at": "2024-02-05T10:23...
#6627 · Disable `tqdm` bars in non-interactive environments (PR, closed)
user: mariosasko · id: 2,105,735,816 · created: 2024-01-29T15:18:21 · updated: 2024-01-29T15:47:34 · closed: 2024-01-29T15:41:32
https://github.com/huggingface/datasets/pull/6627
body: Replace `disable=False` with `disable=None` in the `tqdm` bars to disable them in non-interactive environments (by default). For more info, see a [similar PR](https://github.com/huggingface/huggingface_hub/pull/2000) in `huggingface_hub`.
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6627). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6627", "html_url": "https://github.com/huggingface/datasets/pull/6627", "diff_url": "https://github.com/huggingface/datasets/pull/6627.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6627.patch", "merged_at": "2024-01-29T15:41...
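The `tqdm` behavior #6627 relies on: with `disable=None`, the bar is suppressed when the output stream is not a TTY, whereas `disable=False` always renders it. A minimal sketch using an in-memory stream as the non-interactive output:

```python
import io

from tqdm.auto import tqdm

# a StringIO is not a terminal: isatty() returns False, so disable=None
# suppresses the bar entirely
buf = io.StringIO()
for _ in tqdm(range(3), file=buf, disable=None):
    pass

print(buf.getvalue() == "")  # True: nothing was written to the stream
```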
#6626 · Raise error on bad split name (PR, closed)
user: lhoestq · id: 2,105,482,522 · created: 2024-01-29T13:17:41 · updated: 2024-01-29T15:18:25 · closed: 2024-01-29T15:12:18
https://github.com/huggingface/datasets/pull/6626
body: e.g. dashes '-' are not allowed in split names This should add an error message on datasets with unsupported split names like https://huggingface.co/datasets/open-source-metrics/test cc @AndreaFrancis
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6626). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6626", "html_url": "https://github.com/huggingface/datasets/pull/6626", "diff_url": "https://github.com/huggingface/datasets/pull/6626.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6626.patch", "merged_at": "2024-01-29T15:12...
#6624 · How to download the laion-coco dataset (issue, closed)
user: vanpersie32 · id: 2,103,950,718 · created: 2024-01-28T03:56:05 · updated: 2024-02-06T09:43:31 · closed: 2024-02-06T09:43:31
https://github.com/huggingface/datasets/issues/6624
body: The laion coco dataset is not available now. How to download it https://huggingface.co/datasets/laion/laion-coco
comments: [ "Hi, this dataset has been disabled by the authors, so unfortunately it's no longer possible to download it." ]
#6623 · streaming datasets doesn't work properly with multi-node (issue, open)
user: rohitgr7 · id: 2,103,870,123 · created: 2024-01-27T23:46:13 · updated: 2024-10-16T00:55:19 · closed: null
https://github.com/huggingface/datasets/issues/6623
body: ### Feature request Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it. Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt...
comments: [ "@mariosasko, @lhoestq, @albertvillanova\r\nhey guys! can anyone help? or can you guys suggest who can help with this?", "Hi ! \r\n\r\n1. When the dataset is running of of examples, the last batches received by the GPU can be incomplete or empty/missing. We haven't implemented yet a way to ignore the last batch. ...
#6622 · multi-GPU map does not work (issue, closed)
user: kopyl · id: 2,103,780,697 · created: 2024-01-27T20:06:08 · updated: 2024-02-08T11:18:21 · closed: 2024-02-08T11:18:21
https://github.com/huggingface/datasets/issues/6622
body: ### Describe the bug Here is the code for single-GPU processing: https://pastebin.com/bfmEeK2y Here is the code for multi-GPU processing: https://pastebin.com/gQ7i5AQy Here is the video showing that the multi-GPU mapping does not work as expected (there are so many things wrong here, it's better to watch the 3-min...
comments: [ "This should now be fixed by https://github.com/huggingface/datasets/pull/6550 and updated with https://github.com/huggingface/datasets/pull/6646\r\n\r\nFeel free to re-open if you're still having issues :)" ]
#6621 · deleted (issue, closed)
user: kopyl · id: 2,103,675,294 · created: 2024-01-27T16:59:58 · updated: 2024-01-27T17:14:43 · closed: 2024-01-27T17:14:43
https://github.com/huggingface/datasets/issues/6621
body: ...
comments: []
#6620 · wiki_dpr.py error (ID mismatch between lines {id} and vector {vec_id} (issue, closed)
user: kiehls90 · id: 2,103,110,536 · created: 2024-01-27T01:00:09 · updated: 2024-02-06T09:40:19 · closed: 2024-02-06T09:40:19
https://github.com/huggingface/datasets/issues/6620
body: ### Describe the bug I'm trying to run a rag example, and the dataset is wiki_dpr. wiki_dpr download and extracting have been completed successfully. However, at the generating train split stage, an error from wiki_dpr.py keeps popping up. Especially in "_generate_examples" : 1. The following error occurs in the...
comments: [ "Thanks for reporting, @kiehls90.\r\n\r\nAs this seems an issue with the specific \"wiki_dpr\" dataset, I am transferring the issue to the corresponding dataset page: https://huggingface.co/datasets/wiki_dpr/discussions/13" ]
#6619 · Migrate from `setup.cfg` to `pyproject.toml` (PR, closed)
user: mariosasko · id: 2,102,407,478 · created: 2024-01-26T15:27:10 · updated: 2024-01-26T15:53:40 · closed: 2024-01-26T15:47:32
https://github.com/huggingface/datasets/pull/6619
body: Based on https://github.com/huggingface/huggingface_hub/pull/1971 in `hfh`
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6619). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6619", "html_url": "https://github.com/huggingface/datasets/pull/6619", "diff_url": "https://github.com/huggingface/datasets/pull/6619.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6619.patch", "merged_at": "2024-01-26T15:47...
#6618 · While importing load_dataset from datasets (issue, closed)
user: suprith-hub · id: 2,101,868,198 · created: 2024-01-26T09:21:57 · updated: 2024-07-23T09:31:07 · closed: 2024-02-06T09:25:54
https://github.com/huggingface/datasets/issues/6618
body: ### Describe the bug cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_' this is the error i received ### Steps to reproduce the bug from datasets import load_dataset ### Expected behavior No errors ### Environment info python 3.11.5
comments: [ "Hi! Can you please share the error's stack trace so we can see where it comes from?", "We cannot reproduce the issue and we do not have enough information: environment info (need to run `datasets-cli env`), stack trace,...\r\n\r\nI am closing the issue. Feel free to reopen it (with additional information) if the...
#6617 · Fix CI: pyarrow 15, pandas 2.2 and sqlachemy (PR, closed)
user: lhoestq · id: 2,100,459,449 · created: 2024-01-25T13:57:41 · updated: 2024-01-26T14:56:46 · closed: 2024-01-26T14:50:44
https://github.com/huggingface/datasets/pull/6617
body: this should fix the CI failures on `main` close https://github.com/huggingface/datasets/issues/5477
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6617). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6617", "html_url": "https://github.com/huggingface/datasets/pull/6617", "diff_url": "https://github.com/huggingface/datasets/pull/6617.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6617.patch", "merged_at": "2024-01-26T14:50...
#6616 · Use schema metadata only if it matches features (PR, closed)
user: lhoestq · id: 2,100,125,709 · created: 2024-01-25T11:01:14 · updated: 2024-01-26T16:25:24 · closed: 2024-01-26T16:19:12
https://github.com/huggingface/datasets/pull/6616
body: e.g. if we use `map` in arrow format and transform the table, the returned table might have new columns but the metadata might be wrong
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6616). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/6616", "html_url": "https://github.com/huggingface/datasets/pull/6616", "diff_url": "https://github.com/huggingface/datasets/pull/6616.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6616.patch", "merged_at": "2024-01-26T16:19...
2,098,951,409
6,615
...
closed
[ "Sorry I posted in the wrong repo, please delete.. thanks!" ]
2024-01-24T19:37:03
2024-01-24T19:42:30
2024-01-24T19:40:11
...
ftkeys
https://github.com/huggingface/datasets/issues/6615
null
false
2,098,884,520
6,614
`datasets/downloads` cleanup tool
open
[]
2024-01-24T18:52:10
2024-01-24T18:55:09
null
### Feature request Splitting off https://github.com/huggingface/huggingface_hub/issues/1997 - currently `huggingface-cli delete-cache` doesn't take care of cleaning `datasets` temp files e.g. I discovered having millions of files under `datasets/downloads` cache, I had to do: ``` sudo find /data/huggingface/...
stas00
https://github.com/huggingface/datasets/issues/6614
null
false
2,098,078,210
6,612
cnn_dailymail repeats itself
closed
[ "Hi ! We recently updated `cnn_dailymail` and now `datasets>=2.14` is needed to load it.\r\n\r\nYou can update `datasets` with\r\n\r\n```\r\npip install -U datasets\r\n```" ]
2024-01-24T11:38:25
2024-02-01T08:14:50
2024-02-01T08:14:50
### Describe the bug When I try to load `cnn_dailymail` dataset, it takes longer than usual and when I checked the dataset it's 3x bigger than it's supposed to be. Check https://huggingface.co/datasets/cnn_dailymail: it says 287k rows for train. But when I check length of train split it says 861339. Also I che...
KeremZaman
https://github.com/huggingface/datasets/issues/6612
null
false
2,096,004,858
6,611
`load_from_disk` with large dataset from S3 runs into `botocore.exceptions.ClientError`
open
[]
2024-01-23T12:37:57
2024-01-23T12:37:57
null
### Describe the bug When loading a large dataset (>1000GB) from S3 I run into the following error: ``` Traceback (most recent call last): File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 113, in _error_wrapper return await func(*args, **kwargs) File "/home/alp/.local/lib/python3....
zotroneneis
https://github.com/huggingface/datasets/issues/6611
null
false
2,095,643,711
6,610
cast_column to Sequence(subfeatures_dict) raises an error
closed
[ "Hi! You are passing the wrong feature type to `cast_column`. This is the fixed call:\r\n```python\r\nais_dataset = ais_dataset.cast_column(\"my_labeled_bbox\", {\"bbox\": Sequence(Value(dtype=\"int64\")), \"label\": ClassLabel(names=[\"cat\", \"dog\"])})\r\n```", "> Hi! You are passing the wrong feature type to ...
2024-01-23T09:32:32
2024-01-25T02:15:23
2024-01-25T02:15:23
### Describe the bug I am working with the following demo code: ``` from datasets import load_dataset from datasets.features import Sequence, Value, ClassLabel, Features ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1978/") ais_dataset = ais_dataset["train"] def add_class(example): ...
neiblegy
https://github.com/huggingface/datasets/issues/6610
null
false
2,095,085,650
6,609
Wrong path for cache directory in offline mode
closed
[ "+1", "same error in 2.16.1", "@kongjiellx any luck with the issue?", "I opened https://github.com/huggingface/datasets/pull/6632 to fix this issue. Once it's merged we'll do a new release of `datasets`", "Thanks @lhoestq !" ]
2024-01-23T01:47:19
2024-02-06T17:21:25
2024-02-06T17:21:25
### Describe the bug Dear huggingfacers, I'm trying to use a subset of the-stack dataset. When I run the command the first time ``` dataset = load_dataset( path='bigcode/the-stack', data_dir='data/fortran', split='train' ) ``` It downloads the files and caches them normally. Nevertheless, ...
je-santos
https://github.com/huggingface/datasets/issues/6609
null
false
2,094,153,292
6,608
Add `with_rank` param to `Dataset.filter`
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6608). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-01-22T15:19:16
2024-01-29T16:43:11
2024-01-29T16:36:53
Fix #6564
mariosasko
https://github.com/huggingface/datasets/pull/6608
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6608", "html_url": "https://github.com/huggingface/datasets/pull/6608", "diff_url": "https://github.com/huggingface/datasets/pull/6608.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6608.patch", "merged_at": "2024-01-29T16:36...
true
2,091,766,063
6,607
Update features.py to avoid bfloat16 unsupported error
closed
[ "I think not all torch tensors should be converted to float, what if it's a tensor of integers for example ?\r\nMaybe you can check for the tensor dtype before converting", "@lhoestq Please could this be merged? 🙏", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show up...
2024-01-20T00:39:44
2024-05-17T09:46:29
2024-05-17T09:40:13
Fixes https://github.com/huggingface/datasets/issues/6566 Let me know if there are any tests I need to clear.
skaulintel
https://github.com/huggingface/datasets/pull/6607
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6607", "html_url": "https://github.com/huggingface/datasets/pull/6607", "diff_url": "https://github.com/huggingface/datasets/pull/6607.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6607.patch", "merged_at": "2024-05-17T09:40...
true
2,091,088,785
6,606
Dedicated RNG object for fingerprinting
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6606). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-01-19T18:34:47
2024-01-26T15:11:38
2024-01-26T15:05:34
Closes https://github.com/huggingface/datasets/issues/6604, closes https://github.com/huggingface/datasets/issues/2775
mariosasko
https://github.com/huggingface/datasets/pull/6606
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6606", "html_url": "https://github.com/huggingface/datasets/pull/6606", "diff_url": "https://github.com/huggingface/datasets/pull/6606.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6606.patch", "merged_at": "2024-01-26T15:05...
true
2,090,188,376
6,605
ELI5 no longer available, but referenced in example code
closed
[ "Addressed in https://github.com/huggingface/transformers/pull/28715." ]
2024-01-19T10:21:52
2024-02-01T17:58:23
2024-02-01T17:58:22
Here, example code is given: https://huggingface.co/docs/transformers/tasks/language_modeling This code and article reference the ELI5 dataset. ELI5 is no longer available, as the ELI5 dataset page states: https://huggingface.co/datasets/eli5 "Defunct: Dataset "eli5" is defunct and no longer accessible due to u...
drdsgvo
https://github.com/huggingface/datasets/issues/6605
null
false
2,089,713,945
6,604
Transform fingerprint collisions due to setting fixed random seed
closed
[ "I've opened a PR with a fix.", "I don't think the PR fixes the root cause, since it still relies on the `random` library which will often have its seed fixed. I think the builtin `uuid.uuid4()` is a better choice: https://docs.python.org/3/library/uuid.html" ]
2024-01-19T06:32:25
2024-01-26T15:05:35
2024-01-26T15:05:35
### Describe the bug The transform fingerprinting logic relies on the `random` library for random bits when the function is not hashable (e.g. bound methods as used in `trl`: https://github.com/huggingface/trl/blob/main/trl/trainer/dpo_trainer.py#L356). This causes collisions when the training code sets a fixed random...
normster
https://github.com/huggingface/datasets/issues/6604
null
false
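The failure mode described here can be illustrated with the standard library alone; this is a sketch of the mechanism, not the library's actual fingerprinting code:

```python
import random
import uuid

# With a fixed global seed (common in training scripts), the "random"
# bits used for a fingerprint are identical across runs and processes.
random.seed(42)
first = random.getrandbits(64)
random.seed(42)
second = random.getrandbits(64)
assert first == second  # collision: both runs draw the same bits

# uuid4() does not depend on the random module's global state,
# so two draws never collide in practice.
assert uuid.uuid4() != uuid.uuid4()
```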
2,089,230,766
6,603
datasets map `cache_file_name` does not work
open
[ "Unfortunately, I'm unable to reproduce this error. Can you share the reproducer?", "```\r\nds = datasets.Dataset.from_dict(dict(a=[i for i in range(100)]))\r\nds.map(lambda item: dict(b=item['a'] * 2), cache_file_name=\"/tmp/whatever-fn\") # this worked\r\nds.map(lambda item: dict(b=item['a'] * 2), cache_file_na...
2024-01-18T23:08:30
2024-01-28T04:01:15
null
### Describe the bug In the documentation `datasets.Dataset.map` arg `cache_file_name` is said to be a string, but it doesn't work. ### Steps to reproduce the bug 1. pick a dataset 2. write a map function 3. do `ds.map(..., cache_file_name='some_filename')` 4. it crashes ### Expected behavior It will tell you t...
ChenchaoZhao
https://github.com/huggingface/datasets/issues/6603
null
false
2,089,217,483
6,602
Index error when data is large
open
[ "I'm facing this problem while doing my translation of [mteb/stackexchange-clustering](https://huggingface.co/datasets/mteb/stackexchange-clustering). each row has lots of samples (up to 100k samples), because in this dataset, each row represent multiple clusters.\nmy hack is to setting `max_shard_size` to 20Gb or ...
2024-01-18T23:00:47
2025-04-16T04:13:01
null
### Describe the bug At the `save_to_disk` step, `max_shard_size` defaults to `500MB`. However, one row of the dataset might be larger than `500MB`, in which case saving throws an index error. Without looking at the source code, the bug seems due to a wrong calculation of the number of shards, which I think is `total_size / m...
ChenchaoZhao
https://github.com/huggingface/datasets/issues/6602
null
false
2,088,624,054
6,601
add safety checks when using only part of dataset
open
[ "Hi ! The metrics in `datasets` are deprecated in favor of https://github.com/huggingface/evaluate\r\n\r\nYou can open a PR here instead: https://huggingface.co/spaces/evaluate-metric/squad_v2/tree/main" ]
2024-01-18T16:16:59
2024-02-08T14:33:10
null
Added some checks to prevent errors that arise when using evaluate.py on only a portion of the SQuAD 2.0 dataset.
benseddikismail
https://github.com/huggingface/datasets/pull/6601
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6601", "html_url": "https://github.com/huggingface/datasets/pull/6601", "diff_url": "https://github.com/huggingface/datasets/pull/6601.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6601.patch", "merged_at": null }
true
2,088,446,385
6,600
Loading CSV exported dataset has unexpected format
open
[ "Hi! Parquet is the only format that supports complex/nested features such as `Translation`. So, this should work:\r\n```python\r\ntest_dataset = load_dataset(\"opus100\", name=\"en-fr\", split=\"test\")\r\n\r\n# Save with .to_parquet()\r\ntest_parquet_path = \"try_testset_save.parquet\"\r\ntest_dataset.to_parquet(...
2024-01-18T14:48:27
2024-01-23T14:42:32
null
### Describe the bug I wanted to be able to save a HF dataset for translations and load it again in another script, but I'm a bit confused with the documentation and the result I've got so I'm opening this issue to ask if this behavior is as expected. ### Steps to reproduce the bug The documentation I've mainly cons...
OrianeN
https://github.com/huggingface/datasets/issues/6600
null
false
2,086,684,664
6,599
Easy way to segment into 30s snippets given an m4a file and a vtt file
closed
[ "Hi! Non-generic data processing is out of this library's scope, so it's downstream libraries/users' responsibility to implement such logic.", "That's fair. Thanks" ]
2024-01-17T17:51:40
2024-01-23T10:42:17
2024-01-22T15:35:49
### Feature request Uploading datasets is straightforward thanks to the ability to push Audio to hub. However, it would be nice if the data (text and audio) could be segmented when being pushed (if not possible already). ### Motivation It's easy to create a vtt file from an audio file. If there could be auto-segment...
RonanKMcGovern
https://github.com/huggingface/datasets/issues/6599
null
false
2,084,236,605
6,598
Unexpected keyword argument 'hf' when downloading CSV dataset from S3
closed
[ "I am facing similar issue while reading a csv file from s3. Wondering if somebody has found a workaround. ", "same thing happened to other formats like parquet", "I am facing similar issue while reading a parquet file from s3.\r\ni try with every version between 2.14 to 2.16.1 but it dosen't work ", "Re-def...
2024-01-16T15:16:01
2025-01-31T15:35:33
2024-07-23T14:30:10
### Describe the bug I receive this error message when using `load_dataset` with "csv" path and `dataset_files=s3://...`: ``` TypeError: Session.__init__() got an unexpected keyword argument 'hf' ``` I found a similar issue here: https://stackoverflow.com/questions/77596258/aws-issue-load-dataset-from-s3-fails-w...
dguenms
https://github.com/huggingface/datasets/issues/6598
null
false
2,083,708,521
6,597
Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace
closed
[ "It is caused by these code lines: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/dataset_dict.py#L1688-L1694", "Also note the information in the docstring: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/datase...
2024-01-16T11:27:07
2024-02-05T12:29:37
2024-02-05T12:29:37
While using `Dataset.push_to_hub` of a canonical dataset, an additional dataset was created under my user namespace. ## Steps to reproduce the bug The command: ```python commit_info = ds.push_to_hub( "caner", config_name="default", commit_message="Convert dataset to Parquet", commit_descriptio...
albertvillanova
https://github.com/huggingface/datasets/issues/6597
null
false
2,083,108,156
6,596
Drop redundant None guard.
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6596). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-01-16T06:31:54
2024-01-16T17:16:16
2024-01-16T17:05:52
`xxx if xxx is not None else None` is a no-op.
xkszltl
https://github.com/huggingface/datasets/pull/6596
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6596", "html_url": "https://github.com/huggingface/datasets/pull/6596", "diff_url": "https://github.com/huggingface/datasets/pull/6596.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6596.patch", "merged_at": "2024-01-16T17:05...
true
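The simplification can be checked directly: for any operand, the guarded expression returns it unchanged.

```python
def guarded(x):
    # The pattern removed by this PR; it returns x unchanged in every branch.
    return x if x is not None else None

# Identity holds for None and non-None values alike.
for value in (None, 0, "", [1, 2], {"a": 1}):
    assert guarded(value) is value
```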
2,082,896,148
6,595
Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2
closed
[ "Hi ! I think the issue comes from the \"float16\" features that are not supported yet in Parquet\r\n\r\nFeel free to open an issue in `pyarrow` about this. In the meantime, I'd encourage you to use \"float32\" for your \"pooled_prompt_embeds\" and \"prompt_embeds\" features.\r\n\r\nYou can cast them to \"float32\"...
2024-01-16T02:03:09
2024-01-27T18:26:33
2024-01-26T02:28:32
### Describe the bug I'm aware of issue #5695. I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16 So I 1. Map the dataset 2. Save to disk 3. Try to upload: ``` import data...
kopyl
https://github.com/huggingface/datasets/issues/6595
null
false
2,082,748,275
6,594
IterableDataset sharding logic needs improvement
open
[ "I do not know is it the same probelm as mine. I think the num_workers should a value of process number for one dataloader mapped to one card, or the total number of processes for all multiple cards. \r\nbut when I set the num_workers larger then the count of training split files, it will report num_workers ...
2024-01-15T22:22:36
2024-10-15T06:27:13
null
### Describe the bug The sharding of IterableDatasets with respect to distributed and dataloader worker processes appears problematic, with significant performance traps and inconsistencies between distributed train processes and worker processes. Splitting across num_workers (per train process loader processes) and...
rwightman
https://github.com/huggingface/datasets/issues/6594
null
false
2,082,410,257
6,592
Logs are delayed when doing .map when `docker logs`
closed
[ "Hi! `tqdm` doesn't work well in non-interactive environments, so there isn't much we can do about this. It's best to [disable it](https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/utilities#datasets.disable_progress_bars) in such environments and instead use logging to track progress." ]
2024-01-15T17:05:21
2024-02-12T17:35:21
2024-02-12T17:35:21
### Describe the bug When I run my SD training in a Docker image and then listen to logs like `docker logs train -f`, the progress bar is delayed. It's updating every few percent. When you have a large dataset that has to be mapped (like 1+ million samples), it's crucial to see the updates in real-time, not every co...
kopyl
https://github.com/huggingface/datasets/issues/6592
null
false
2,082,378,957
6,591
The datasets models housed in Dropbox can't support a lot of users downloading them
closed
[ "Hi! Indeed, Dropbox is not a reliable host. I've just merged https://huggingface.co/datasets/PolyAI/minds14/discussions/24 to fix this by hosting the data files inside the repo." ]
2024-01-15T16:43:38
2024-01-22T23:18:09
2024-01-22T23:18:09
### Describe the bug I'm using the datasets ``` from datasets import load_dataset, Audio dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") ``` And it seems that sometimes when I imagine a lot of users are accessing the same resources, the Dropbox host fails: `raise ConnectionError(...
RDaneelOlivav
https://github.com/huggingface/datasets/issues/6591
null
false
2,082,000,084
6,590
Feature request: Multi-GPU dataset mapping for SDXL training
open
[]
2024-01-15T13:06:06
2024-01-15T13:07:07
null
### Feature request We need to speed up SDXL dataset pre-processing. Please make it possible to use multiple GPUs for the [official SDXL trainer](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py) :) ### Motivation Pre-computing 3 million images takes around ...
kopyl
https://github.com/huggingface/datasets/issues/6590
null
false
2,081,358,619
6,589
After `2.16.0` version, there are `PermissionError` when users use shared cache_dir
closed
[ "We'll do a new release of `datasets` in the coming days with a fix !", "@lhoestq Thank you very much!" ]
2024-01-15T06:46:27
2024-02-02T07:55:38
2024-01-30T15:28:38
### Describe the bug - We use a shared `cache_dir` via `HF_HOME="{shared_directory}"` - Since datasets version 2.16.0, datasets uses the `filelock` package for file locking #6445 - But the `filelock` package creates `.lock` files with `644` permission - The dataset is not available to other users except the user who created the ...
minhopark-neubla
https://github.com/huggingface/datasets/issues/6589
null
false
2,081,284,253
6,588
Fix os.listdir returning an empty string name
closed
[]
2024-01-15T05:34:36
2024-01-24T10:08:29
2024-01-24T10:08:29
### Describe the bug The overloaded os.listdir (xlistdir) returns an empty string as a name. ### Steps to reproduce the bug ```python from datasets.download.streaming_download_manager import xjoin from datasets.download.streaming_download_manager import xlistdir config = DownloadConfig(storage_options=options) manger = Str...
d710055071
https://github.com/huggingface/datasets/issues/6588
null
false
2,080,348,016
6,587
Allow concatenation of datasets with mixed structs
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6587). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "friendly bump", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<...
2024-01-13T15:33:20
2024-02-15T15:20:06
2024-02-08T14:38:32
Fixes #6466 The idea is to do a recursive check for structs. PyArrow handles it well enough. For a demo you can do: ```python from datasets import Dataset, concatenate_datasets ds = Dataset.from_dict({'speaker': [{'name': 'Ben', 'email': None}]}) ds2 = Dataset.from_dict({'speaker': [{'name': 'Fred', 'e...
Dref360
https://github.com/huggingface/datasets/pull/6587
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6587", "html_url": "https://github.com/huggingface/datasets/pull/6587", "diff_url": "https://github.com/huggingface/datasets/pull/6587.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6587.patch", "merged_at": "2024-02-08T14:38...
true
2,079,192,651
6,586
keep more info in DatasetInfo.from_merge #6585
closed
[ "@JochenSiegWork fyi, that seems to also affect the `trainer.push_to_hub()` method, which I guess also needs to parse that DatasetInfo from the `kwargs` used by `push_to_hub`.\r\nThere is short discussion about it [here](https://github.com/huggingface/blog/issues/1623).\r\nWould be great if you can check if your PR...
2024-01-12T16:08:16
2024-01-26T15:59:35
2024-01-26T15:53:28
* try not to merge DatasetInfos if they're equal * fixes losing DatasetInfo during parallel Dataset.map
JochenSiegWork
https://github.com/huggingface/datasets/pull/6586
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6586", "html_url": "https://github.com/huggingface/datasets/pull/6586", "diff_url": "https://github.com/huggingface/datasets/pull/6586.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6586.patch", "merged_at": "2024-01-26T15:53...
true
2,078,874,005
6,585
losing DatasetInfo in Dataset.map when num_proc > 1
open
[ "Hi ! This issue comes from the fact that `map()` with `num_proc>1` shards the dataset in multiple chunks to be processed (one per process) and merges them. The DatasetInfos of each chunk are then merged together, but for some fields like `dataset_name` it's not been implemented and default to None.\r\n\r\nThe Data...
2024-01-12T13:39:19
2024-01-12T14:08:24
null
### Describe the bug Hello and thanks for developing this package! When I process a Dataset with the map function using multiple processors some set attributes of the DatasetInfo get lost and are None in the resulting Dataset. ### Steps to reproduce the bug ```python from datasets import Dataset, DatasetInfo...
JochenSiegWork
https://github.com/huggingface/datasets/issues/6585
null
false
2,078,454,878
6,584
np.fromfile not supported
open
[ "@lhoestq\r\nCan you provide me with some ideas?", "Hi ! What's the error ?", "@lhoestq \r\n```\r\nTraceback (most recent call last):\r\n File \"/home/dongzf/miniconda3/envs/dataset_ai/lib/python3.11/runpy.py\", line 198, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n ^^...
2024-01-12T09:46:17
2024-01-15T05:20:50
null
How can np.fromfile be supported so it can be used like np.load? ```python def xnumpy_fromfile(filepath_or_buffer, *args, download_config: Optional[DownloadConfig] = None, **kwargs): import numpy as np if hasattr(filepath_or_buffer, "read"): return np.fromfile(filepath_or_buffer, *args, **kwargs) else: ...
d710055071
https://github.com/huggingface/datasets/issues/6584
null
false
2,077,049,491
6,583
remove eli5 test
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6583). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-01-11T16:05:20
2024-01-11T16:15:34
2024-01-11T16:09:24
since the dataset is defunct
lhoestq
https://github.com/huggingface/datasets/pull/6583
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6583", "html_url": "https://github.com/huggingface/datasets/pull/6583", "diff_url": "https://github.com/huggingface/datasets/pull/6583.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6583.patch", "merged_at": "2024-01-11T16:09...
true
2,076,072,101
6,582
Fix for Incorrect ex_iterable used with multi num_worker
closed
[ "A toy example to reveal the bug.\r\n\r\n```python\r\n\"\"\"\r\nDATASETS_VERBOSITY=debug torchrun --nproc-per-node 2 main.py \r\n\"\"\"\r\nimport torch.utils.data\r\nimport torch.distributed\r\nimport datasets.distributed\r\nimport datasets\r\n\r\n# num shards = 4\r\nshards = [(0, 100), (100, 200), (200, 300), (300...
2024-01-11T08:49:43
2024-03-01T19:09:14
2024-03-01T19:02:33
Corrects an issue where `self._ex_iterable` was erroneously used instead of `ex_iterable`, when both Distributed Data Parallel (DDP) and multi num_worker are used concurrently. This improper usage led to the generation of incorrect `shards_indices`, subsequently causing issues with the control flow responsible for work...
kq-chen
https://github.com/huggingface/datasets/pull/6582
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6582", "html_url": "https://github.com/huggingface/datasets/pull/6582", "diff_url": "https://github.com/huggingface/datasets/pull/6582.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6582.patch", "merged_at": "2024-03-01T19:02...
true
2,075,919,265
6,581
Fix os.listdir returning an empty string name
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6581). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "\r\nObj [\"name\"] ends with \"/\"", "@lhoestq \r\n\r\nhello,\r\nCan you help me chec...
2024-01-11T07:10:55
2024-01-24T10:14:43
2024-01-24T10:08:28
Fix #6588: xlistdir returns an empty string name. For example: ` from datasets.download.streaming_download_manager import xjoin from datasets.download.streaming_download_manager import xlistdir config = DownloadConfig(storage_options=options) manger = StreamingDownloadManager("ILSVRC2012",download_config=config...
d710055071
https://github.com/huggingface/datasets/pull/6581
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6581", "html_url": "https://github.com/huggingface/datasets/pull/6581", "diff_url": "https://github.com/huggingface/datasets/pull/6581.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6581.patch", "merged_at": "2024-01-24T10:08...
true
2,075,645,042
6,580
Dataset cache only stores one config of the dataset in the parquet dir and uses it for all other configs, resulting in the same data being shown for all configs
closed
[]
2024-01-11T03:14:18
2024-01-20T12:46:16
2024-01-20T12:46:16
### Describe the bug ds = load_dataset("ai2_arc", "ARC-Easy"); I have tried forcing a redownload, deleting the cache, and changing the cache dir. ### Steps to reproduce the bug dataset = [] dataset_name = "ai2_arc" possible_configs = [ 'ARC-Challenge', 'ARC-Easy' ] for config in possible_configs: data...
kartikgupta321
https://github.com/huggingface/datasets/issues/6580
null
false
2,075,407,473
6,579
Unable to load `eli5` dataset with streaming
closed
[ "Hi @haok1402, I have created an issue in the Discussion tab of the corresponding dataset: https://huggingface.co/datasets/eli5/discussions/7\r\nLet's continue the discussion there!" ]
2024-01-10T23:44:20
2024-01-11T09:19:18
2024-01-11T09:19:17
### Describe the bug Unable to load `eli5` dataset with streaming. ### Steps to reproduce the bug This fails with FileNotFoundError: https://files.pushshift.io/reddit/submissions ``` from datasets import load_dataset load_dataset("eli5", streaming=True) ``` This works correctly. ``` from datasets import lo...
haok1402
https://github.com/huggingface/datasets/issues/6579
null
false
2,074,923,321
6,578
Faster webdataset streaming
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6578). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I added faster streaming support using streaming Requests instances in `huggingface_hub...
2024-01-10T18:18:09
2024-01-30T18:46:02
2024-01-30T18:39:51
requests.get(..., stream=True) is faster than using HTTP range requests when streaming large TAR files. It can be enabled using block_size=0 in fsspec. cc @rwightman
lhoestq
https://github.com/huggingface/datasets/pull/6578
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6578", "html_url": "https://github.com/huggingface/datasets/pull/6578", "diff_url": "https://github.com/huggingface/datasets/pull/6578.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6578.patch", "merged_at": "2024-01-30T18:39...
true
2,074,790,848
6,577
502 Server Errors when streaming large dataset
closed
[ "cc @mariosasko @lhoestq ", "Hi! We should be able to avoid this error by retrying to read the data when it happens. I'll open a PR in `huggingface_hub` to address this.", "Thanks for the fix @mariosasko! Just wondering whether \"500 error\" should also be excluded? I got these errors overnight:\r\n\r\n```\r\nh...
2024-01-10T16:59:36
2024-02-12T11:46:03
2024-01-15T16:05:44
### Describe the bug When streaming a [large ASR dataset](https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set) from the Hug (~3TB) I often encounter 502 Server Errors seemingly randomly during streaming: ``` huggingface_hub.utils._errors.HfHubHTTPError: 502 Server Error: Bad Gateway for url: http...
sanchit-gandhi
https://github.com/huggingface/datasets/issues/6577
null
false
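The eventual fix landed in `huggingface_hub`'s retry logic, but the client-side idea can be sketched generically: reopen the stream on a transient error and skip the examples already yielded. The helper below is hypothetical and not part of `datasets`:

```python
def iterate_with_retries(make_iterator, max_retries=5, retriable=(ConnectionError,)):
    """Yield from make_iterator(), restarting it on transient errors."""
    seen = 0
    retries = 0
    while True:
        try:
            for i, example in enumerate(make_iterator()):
                if i < seen:
                    continue  # already yielded before the last failure
                seen += 1
                yield example
            return
        except retriable:
            retries += 1
            if retries > max_retries:
                raise

# Simulated flaky stream: fails once partway through, then succeeds.
attempts = {"n": 0}
def flaky_stream():
    attempts["n"] += 1
    for i in range(5):
        if attempts["n"] == 1 and i == 3:
            raise ConnectionError("502 Server Error: Bad Gateway")
        yield i

assert list(iterate_with_retries(flaky_stream)) == [0, 1, 2, 3, 4]
```

Note that this naive skip-by-index only works when the stream yields examples in a deterministic order across reconnects.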
2,073,710,124
6,576
Documentation page returns 404 Not Found after redirection
closed
[ "Thanks for reporting! I've opened a PR with a fix." ]
2024-01-10T06:48:14
2024-01-17T14:01:31
2024-01-17T14:01:31
### Describe the bug The redirected page encountered 404 not found. ### Steps to reproduce the bug 1. In this tutorial: https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt original md: https://github.com/huggingface/course/blob/2c733c2246b8b7e0e6f19a9e5d15bb12df43b2a3/chapters/en/chapter5/4.mdx#L49 `...
annahung31
https://github.com/huggingface/datasets/issues/6576
null
false
2,072,617,406
6,575
[IterableDataset] Fix `drop_last_batch`in map after shuffling or sharding
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6575). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-01-09T15:35:31
2024-01-11T16:16:54
2024-01-11T16:10:30
It was not taken into account e.g. when passing to a DataLoader with num_workers>0 Fix https://github.com/huggingface/datasets/issues/6565
lhoestq
https://github.com/huggingface/datasets/pull/6575
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6575", "html_url": "https://github.com/huggingface/datasets/pull/6575", "diff_url": "https://github.com/huggingface/datasets/pull/6575.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6575.patch", "merged_at": "2024-01-11T16:10...
true
2,072,579,549
6,574
Fix tests based on datasets that used to have scripts
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6574). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-01-09T15:16:16
2024-01-09T16:11:33
2024-01-09T16:05:13
...now that `squad` and `paws` don't have a script anymore
lhoestq
https://github.com/huggingface/datasets/pull/6574
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6574", "html_url": "https://github.com/huggingface/datasets/pull/6574", "diff_url": "https://github.com/huggingface/datasets/pull/6574.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6574.patch", "merged_at": "2024-01-09T16:05...
true
2,072,553,951
6,573
[WebDataset] Audio support and bug fixes
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6573). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-01-09T15:03:04
2024-01-11T16:17:28
2024-01-11T16:11:04
- Add audio support - Fix an issue where user-provided features with additional fields are not taken into account Close https://github.com/huggingface/datasets/issues/6569
lhoestq
https://github.com/huggingface/datasets/pull/6573
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6573", "html_url": "https://github.com/huggingface/datasets/pull/6573", "diff_url": "https://github.com/huggingface/datasets/pull/6573.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6573.patch", "merged_at": "2024-01-11T16:11...
true
2,072,384,281
6,572
Adding option for multipart achive download
closed
[ "On closer examination, this appears to be unnecessary. " ]
2024-01-09T13:35:44
2024-02-25T08:13:01
2024-02-25T08:13:01
Right now we can only download multiple separate archives or a single file archive, but not multipart archives, such as those produced by `tar --multi-volume`. This PR allows for downloading and extraction of archives split into multiple parts. With the new `multi_part` field of the `DownloadConfig` set, the downloa...
jpodivin
https://github.com/huggingface/datasets/pull/6572
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6572", "html_url": "https://github.com/huggingface/datasets/pull/6572", "diff_url": "https://github.com/huggingface/datasets/pull/6572.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6572.patch", "merged_at": null }
true
2,072,111,000
6,571
Make DatasetDict.column_names return a list instead of dict
open
[ "Hi @albertvillanova, can I work on this issue?" ]
2024-01-09T10:45:17
2025-09-22T08:47:53
null
Currently, `DatasetDict.column_names` returns a dict, with each split name as keys and the corresponding list of column names as values. However, by construction, all splits have the same column names. I think it makes more sense to return a single list with the column names, which is the same for all the split k...
albertvillanova
https://github.com/huggingface/datasets/issues/6571
null
false
2,071,805,265
6,570
No online docs for 2.16 release
closed
[ "Though the `build / build_main_documentation` CI job ran for 2.16.0: https://github.com/huggingface/datasets/actions/runs/7300836845/job/19896275099 🤔 ", "Yes, I saw it. Maybe @mishig25 can give us some hint...", "fixed https://huggingface.co/docs/datasets/v2.16.0/en/index", "Still missing 2.16.1.", "> St...
2024-01-09T07:43:30
2024-01-09T16:45:50
2024-01-09T16:45:50
We do not have the online docs for the latest minor release 2.16 (2.16.0 nor 2.16.1). In the online docs, the latest version appearing is 2.15.0: https://huggingface.co/docs/datasets/index ![Screenshot from 2024-01-09 08-43-08](https://github.com/huggingface/datasets/assets/8515462/83613222-867f-41f4-8833-7a4a765...
albertvillanova
https://github.com/huggingface/datasets/issues/6570
null
false
2,070,251,122
6,569
WebDataset ignores features defined in YAML or passed to load_dataset
closed
[]
2024-01-08T11:24:21
2024-01-11T16:11:06
2024-01-11T16:11:05
we should not override if the features exist already https://github.com/huggingface/datasets/blob/d26abadce0b884db32382b92422d8a6aa997d40a/src/datasets/packaged_modules/webdataset/webdataset.py#L78-L85
lhoestq
https://github.com/huggingface/datasets/issues/6569
null
false
2,069,922,151
6,568
keep_in_memory=True does not seem to work
open
[ "Seems like I just used the old code which did not have `keep_in_memory=True` argument, sorry.\r\n\r\nAlthough i encountered a different problem – at 97% my python process just hung for around 11 minutes with no logs (when running dataset.map without `keep_in_memory=True` over around 3 million of dataset samples).....
2024-01-08T08:03:58
2024-01-13T04:53:04
null
UPD: [Fixed](https://github.com/huggingface/datasets/issues/6568#issuecomment-1880817794) . But a new issue came up :(
kopyl
https://github.com/huggingface/datasets/issues/6568
null
false
2,069,808,842
6,567
AttributeError: 'str' object has no attribute 'to'
closed
[ "I think you are reporting an issue with the `transformers` library. Note this is the repository of the `datasets` library. I recommend that you open an issue in their repository: https://github.com/huggingface/transformers/issues\r\n\r\nEDIT: I have not the rights to transfer the issue\r\n~~I am transferring your ...
2024-01-08T06:40:21
2024-01-08T11:56:19
2024-01-08T10:03:17
### Describe the bug ``` -------------------------------------------------------------------------- AttributeError Traceback (most recent call last) [<ipython-input-6-80c6086794e8>](https://localhost:8080/#) in <cell line: 10>() 8 report_to="wandb") 9 ---> 10 trainer =...
andysingal
https://github.com/huggingface/datasets/issues/6567
null
false
2,069,495,429
6,566
I train controlnet_sdxl in bf16 datatype, got unsupported ERROR in datasets
closed
[ "I also see the same error and get passed it by casting that line to float. \r\n\r\nso `for x in obj.detach().cpu().numpy()` becomes `for x in obj.detach().to(torch.float).cpu().numpy()`\r\n\r\nI got the idea from [this ](https://github.com/kohya-ss/sd-webui-additional-networks/pull/128/files) PR where someone was...
2024-01-08T02:37:03
2024-06-02T14:24:39
2024-05-17T09:40:14
### Describe the bug ``` Traceback (most recent call last): File "train_controlnet_sdxl.py", line 1252, in <module> main(args) File "train_controlnet_sdxl.py", line 1013, in main train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint) File "/home/mini...
HelloWorldBeginner
https://github.com/huggingface/datasets/issues/6566
null
false
2,068,939,670
6,565
`drop_last_batch=True` for IterableDataset map function is ignored with multiprocessing DataLoader
closed
[ "My current workaround this issue is to return `None` in the second element and then filter out samples which have `None` in them.\r\n\r\n```python\r\ndef merge_samples(batch):\r\n if len(batch['a']) == 1:\r\n batch['c'] = [batch['a'][0]]\r\n batch['d'] = [None]\r\n else:\r\n batch['c'] ...
2024-01-07T02:46:50
2025-03-08T09:46:05
2024-01-11T16:10:31
### Describe the bug Scenario: - Interleaving two iterable datasets of unequal lengths (`all_exhausted`), followed by a batch mapping with batch size 2 to effectively merge the two datasets and get a sample from each dataset in a single batch, with `drop_last_batch=True` to skip the last batch in case it doesn't ha...
naba89
https://github.com/huggingface/datasets/issues/6565
null
false
2,068,893,194
6,564
`Dataset.filter` missing `with_rank` parameter
closed
[ "Thanks for reporting! I've opened a PR with a fix", "@mariosasko thank you very much :)" ]
2024-01-06T23:48:13
2024-01-29T16:36:55
2024-01-29T16:36:54
### Describe the bug The issue shall be open: https://github.com/huggingface/datasets/issues/6435 When i try to pass `with_rank` to `Dataset.filter()`, i get this: `Dataset.filter() got an unexpected keyword argument 'with_rank'` ### Steps to reproduce the bug Run notebook: https://colab.research.google.com...
kopyl
https://github.com/huggingface/datasets/issues/6564
null
false
2,068,302,402
6,563
`ImportError`: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (.../huggingface_hub/utils/__init__.py)
closed
[ "@Wauplin Do you happen to know what's up?", "<del>Installing `datasets` from `main` did the trick so I guess it will be fixed in the next release.\r\n\r\nNVM https://github.com/huggingface/datasets/blob/d26abadce0b884db32382b92422d8a6aa997d40a/src/datasets/utils/info_utils.py#L5", "@wasertech upgrading `huggin...
2024-01-06T02:28:54
2024-03-14T02:59:42
2024-01-06T16:13:27
### Describe the bug Yep its not [there](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/__init__.py) anymore. ```text + python /home/trainer/sft_train.py --model_name cognitivecomputations/dolphin-2.2.1-mistral-7b --dataset_name wasertech/OneOS --load_in_4bit --use_peft --batch_...
wasertech
https://github.com/huggingface/datasets/issues/6563
null
false
2,067,904,504
6,562
datasets.DownloadMode.FORCE_REDOWNLOAD use cache to download dataset features with load_dataset function
open
[]
2024-01-05T19:10:25
2024-01-05T19:10:25
null
### Describe the bug I have updated my dataset by adding a new feature, and push it to the hub. When I want to download it on my machine which contain the old version by using `datasets.load_dataset("your_dataset_name", download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)` I get an error (paste bellow). Seems that...
LsTam91
https://github.com/huggingface/datasets/issues/6562
null
false
2,067,404,951
6,561
Document YAML configuration with "data_dir"
open
[ "In particular, I would like to have an example of how to replace the following configuration (from https://huggingface.co/docs/hub/datasets-manual-configuration#splits)\r\n\r\n```\r\n---\r\nconfigs:\r\n- config_name: default\r\n data_files:\r\n - split: train\r\n path: \"data/*.csv\"\r\n - split: test\r\n ...
2024-01-05T14:03:33
2025-08-07T14:57:58
null
See https://huggingface.co/datasets/uonlp/CulturaX/discussions/15#6597e83f185db94370d6bf50 for reference
severo
https://github.com/huggingface/datasets/issues/6561
null
false
2,065,637,625
6,560
Support Video
closed
[ "duplicate of #5225" ]
2024-01-04T13:10:58
2024-08-23T09:51:27
2024-08-23T09:51:27
### Feature request HF datasets are awesome in supporting text and images. Will be great to see such a support in videos :) ### Motivation Video generation :) ### Your contribution Will probably be limited to raising this feature request ;)
yuvalkirstain
https://github.com/huggingface/datasets/issues/6560
null
false
2,065,118,332
6,559
Latest version 2.16.1, when load dataset error occurs. ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default']
closed
[ "Hi ! The \"allenai--c4\" config doesn't exist (this naming schema comes from old versions of `datasets`)\r\n\r\nYou can load it this way instead:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ncache_dir = 'path/to/your/cache/directory'\r\ndataset = load_dataset('allenai/c4', data_files={'train': 'en/c4-t...
2024-01-04T07:04:48
2024-04-03T10:40:53
2024-01-05T01:26:25
### Describe the bug python script is: ``` from datasets import load_dataset cache_dir = 'path/to/your/cache/directory' dataset = load_dataset('allenai/c4','allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', use_auth_token=False, cache_dir=cache_dir) ``` the script su...
zhulinJulia24
https://github.com/huggingface/datasets/issues/6559
null
false
2,064,885,984
6,558
OSError: image file is truncated (1 bytes not processed) #28323
closed
[ "You can add \r\n\r\n```python\r\nfrom PIL import ImageFile\r\nImageFile.LOAD_TRUNCATED_IMAGES = True\r\n```\r\n\r\nafter the imports to be able to read truncated images." ]
2024-01-04T02:15:13
2024-02-21T00:38:12
2024-02-21T00:38:12
### Describe the bug ``` --------------------------------------------------------------------------- OSError Traceback (most recent call last) Cell In[24], line 28 23 return example 25 # Filter the dataset 26 # filtered_dataset = dataset.filter(contains_number...
andysingal
https://github.com/huggingface/datasets/issues/6558
null
false
2,064,341,965
6,557
Support standalone yaml
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6557). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@lhoestq \r\nhello\r\nI think it should be defined in config.py\r\nDATASET_ README_ FIL...
2024-01-03T16:47:35
2024-01-11T17:59:51
2024-01-11T17:53:42
see (internal) https://huggingface.slack.com/archives/C02V51Q3800/p1703885853581679
lhoestq
https://github.com/huggingface/datasets/pull/6557
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6557", "html_url": "https://github.com/huggingface/datasets/pull/6557", "diff_url": "https://github.com/huggingface/datasets/pull/6557.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6557.patch", "merged_at": "2024-01-11T17:53...
true
2,064,018,208
6,556
Fix imagefolder with one image
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6556). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Fixed in dataset viewer: https://huggingface.co/datasets/multimodalart/repro_1_image\r\...
2024-01-03T13:13:02
2024-02-12T21:57:34
2024-01-09T13:06:30
A dataset repository with one image and one metadata file was considered a JSON dataset instead of an ImageFolder dataset. This is because we pick the dataset type with the most compatible data file extensions present in the repository and it results in a tie in this case. e.g. for https://huggingface.co/datasets/mu...
lhoestq
https://github.com/huggingface/datasets/pull/6556
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6556", "html_url": "https://github.com/huggingface/datasets/pull/6556", "diff_url": "https://github.com/huggingface/datasets/pull/6556.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6556.patch", "merged_at": "2024-01-09T13:06...
true
2,063,841,286
6,555
Do not use Parquet exports if revision is passed
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6555). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "As shared on slack, `HubDatasetModuleFactoryWithParquetExport` raises a `DatasetsServer...
2024-01-03T11:33:10
2024-02-02T10:41:33
2024-02-02T10:35:28
Fix #6554.
albertvillanova
https://github.com/huggingface/datasets/pull/6555
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6555", "html_url": "https://github.com/huggingface/datasets/pull/6555", "diff_url": "https://github.com/huggingface/datasets/pull/6555.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6555.patch", "merged_at": "2024-02-02T10:35...
true
2,063,839,916
6,554
Parquet exports are used even if revision is passed
closed
[ "I don't think this bug is a thing ? Do you have some code that leads to this issue ?" ]
2024-01-03T11:32:26
2024-02-02T10:35:29
2024-02-02T10:35:29
We should not used Parquet exports if `revision` is passed. I think this is a regression.
albertvillanova
https://github.com/huggingface/datasets/issues/6554
null
false
2,063,474,183
6,553
Cannot import name 'load_dataset' from .... module ‘datasets’
closed
[ "I don't know My conpany conputer cannot work. but in my computer, it work?", "Do you have a folder in your working directory called datasets?" ]
2024-01-03T08:18:21
2024-02-21T00:38:24
2024-02-21T00:38:24
### Describe the bug use python -m pip install datasets to install ### Steps to reproduce the bug from datasets import load_dataset ### Expected behavior it doesn't work ### Environment info datasets version==2.15.0 python == 3.10.12 linux version I don't know??
ciaoyizhen
https://github.com/huggingface/datasets/issues/6553
null
false