Schema (each record below lists these fields, one value per line, in this column order):

| column | type | min | max |
| --- | --- | --- | --- |
| id | int64 | 599M | 3.48B |
| number | int64 | 1 | 7.8k |
| title | string (length) | 1 | 290 |
| state | string (2 classes) | | |
| comments | list (length) | 0 | 30 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-10-05 06:37:50 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-10-05 10:32:43 |
| closed_at | timestamp[s], nullable | 2020-04-14 12:01:40 | 2025-10-01 13:56:03 |
| body | string (length), nullable | 0 | 228k |
| user | string (length) | 3 | 26 |
| html_url | string (length) | 46 | 51 |
| pull_request | dict | | |
| is_pull_request | bool (2 classes) | | |
2,549,738,919
7,171
CI is broken: No solution found when resolving dependencies
closed
[]
2024-09-26T07:24:58
2024-09-26T08:05:41
2024-09-26T08:05:41
See: https://github.com/huggingface/datasets/actions/runs/11046967444/job/30687294297 ``` Run uv pip install --system -r additional-tests-requirements.txt --no-deps Γ— No solution found when resolving dependencies: ╰─▢ Because the current Python version (3.8.18) does not satisfy Python>=3.9 and torchdata=...
albertvillanova
https://github.com/huggingface/datasets/issues/7171
null
false
2,546,944,016
7,170
Support JSON lines with missing columns
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7170). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-09-25T05:08:15
2024-09-26T06:42:09
2024-09-26T06:42:07
Support JSON lines with missing columns. Fix #7169. The implemented test raised: ``` datasets.table.CastError: Couldn't cast age: int64 to {'age': Value(dtype='int32', id=None), 'name': Value(dtype='string', id=None)} because column names don't match ``` Related to: - #7160 - #7162
albertvillanova
https://github.com/huggingface/datasets/pull/7170
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7170", "html_url": "https://github.com/huggingface/datasets/pull/7170", "diff_url": "https://github.com/huggingface/datasets/pull/7170.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7170.patch", "merged_at": "2024-09-26T06:42...
true
2,546,894,076
7,169
JSON lines with missing columns raise CastError
closed
[]
2024-09-25T04:43:28
2024-09-26T06:42:08
2024-09-26T06:42:08
JSON lines with missing columns raise CastError: > CastError: Couldn't cast ... to ... because column names don't match Related to: - #7159 - #7161
albertvillanova
https://github.com/huggingface/datasets/issues/7169
null
false
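Issues #7169 and PR #7170 above concern JSON-lines files where some rows omit columns; the expected behavior is that missing columns are filled with nulls rather than raising a `CastError`. A minimal stdlib sketch of that normalization, assuming a known column set (the helper name and columns are illustrative, not the library's actual code):

```python
import json

def normalize_json_lines(lines, columns):
    """Parse JSON-lines records and fill any missing columns with None."""
    for line in lines:
        record = json.loads(line)
        yield {col: record.get(col) for col in columns}

raw = ['{"age": 25, "name": "ada"}', '{"age": 30}']
rows = list(normalize_json_lines(raw, columns=["age", "name"]))
# The second record gains name=None instead of failing the cast.
```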
2,546,710,631
7,168
sd1.5 diffusers controlnet training script gives new error
closed
[ "not sure why the issue is formatting oddly", "I guess this is a dupe of\r\n\r\nhttps://github.com/huggingface/datasets/issues/7071", "this turned out to be because of a bad image in dataset", "@Night1099 could you spiecify what exactly was wrong with your image in the dataset? I think im facing the same issu...
2024-09-25T01:42:49
2025-09-16T15:38:01
2024-09-30T05:24:02
### Describe the bug This will randomly pop up during training now ``` Traceback (most recent call last): File "/workspace/diffusers/examples/controlnet/train_controlnet.py", line 1192, in <module> main(args) File "/workspace/diffusers/examples/controlnet/train_controlnet.py", line 1041, in main ...
Night1099
https://github.com/huggingface/datasets/issues/7168
null
false
2,546,708,014
7,167
Error Mapping on sd3, sdxl and upcoming flux controlnet training scripts in diffusers
closed
[ "this is happening on large datasets, if anyone happens upon this i was able to fix by changing\r\n\r\n```\r\ntrain_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)\r\n```\r\n\r\nto\r\n\r\n```\r\ntrain_dataset = train_dataset.map(compute_embeddings_fn, batched=True, ...
2024-09-25T01:39:51
2024-09-30T05:28:15
2024-09-30T05:28:04
### Describe the bug ``` Map: 6%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 8000/138120 [19:27<5:16:36, 6.85 examples/s] Traceback (most recent call last): File "/workspace/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1416, in <mod...
Night1099
https://github.com/huggingface/datasets/issues/7167
null
false
2,545,608,736
7,166
fix docstring code example for distributed shuffle
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7166). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-09-24T14:39:54
2024-09-24T14:42:41
2024-09-24T14:40:14
close https://github.com/huggingface/datasets/issues/7163
lhoestq
https://github.com/huggingface/datasets/pull/7166
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7166", "html_url": "https://github.com/huggingface/datasets/pull/7166", "diff_url": "https://github.com/huggingface/datasets/pull/7166.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7166.patch", "merged_at": "2024-09-24T14:40...
true
2,544,972,541
7,165
fix increase_load_count
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7165). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I tested a few load_dataset and they do show up in download stats now", "Thanks for h...
2024-09-24T10:14:40
2024-09-24T17:31:07
2024-09-24T13:48:00
it was failing since 3.0 and therefore not updating download counts on HF or in our dashboard
lhoestq
https://github.com/huggingface/datasets/pull/7165
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7165", "html_url": "https://github.com/huggingface/datasets/pull/7165", "diff_url": "https://github.com/huggingface/datasets/pull/7165.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7165.patch", "merged_at": "2024-09-24T13:48...
true
2,544,757,297
7,164
fsspec.exceptions.FSTimeoutError when downloading dataset
closed
[ "Hi ! If you check the dataset loading script [here](https://huggingface.co/datasets/openslr/librispeech_asr/blob/main/librispeech_asr.py) you'll see that it downloads the data from OpenSLR, and apparently their storage has timeout issues. It would be great to ultimately host the dataset on Hugging Face instead.\r\...
2024-09-24T08:45:05
2025-07-28T14:58:49
2025-07-28T14:58:49
### Describe the bug I am trying to download the `librispeech_asr` `clean` dataset, which results in a `FSTimeoutError` exception after downloading around 61% of the data. ### Steps to reproduce the bug ``` import datasets datasets.load_dataset("librispeech_asr", "clean") ``` The output is as follows: > Dow...
timonmerk
https://github.com/huggingface/datasets/issues/7164
null
false
2,542,361,234
7,163
Set explicit seed in iterable dataset ddp shuffling example
closed
[ "thanks for reporting !" ]
2024-09-23T11:34:06
2024-09-24T14:40:15
2024-09-24T14:40:15
### Describe the bug In the examples section of the iterable dataset docs https://huggingface.co/docs/datasets/en/package_reference/main_classes#datasets.IterableDataset the ddp example shuffles without seeding ```python from datasets.distributed import split_dataset_by_node ids = ds.to_iterable_dataset(num_sh...
alex-hh
https://github.com/huggingface/datasets/issues/7163
null
false
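Issue #7163 above reports that the DDP example in the docs shuffles without an explicit seed, so different nodes may disagree on shard order. A stdlib sketch of why a shared explicit seed fixes this (the rank names are illustrative; this is not the `datasets` implementation):

```python
import random

def shuffled_order(num_shards, seed):
    """Each rank derives the shard order from the same explicit seed."""
    order = list(range(num_shards))
    random.Random(seed).shuffle(order)
    return order

rank0 = shuffled_order(8, seed=42)
rank1 = shuffled_order(8, seed=42)
# With a shared seed, every node computes the identical shard order,
# which split_dataset_by_node relies on to partition shards correctly.
```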
2,542,323,382
7,162
Support JSON lines with empty struct
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7162). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-09-23T11:16:12
2024-09-23T11:30:08
2024-09-23T11:30:06
Support JSON lines with empty struct. Fix #7161. Related to: - #7160
albertvillanova
https://github.com/huggingface/datasets/pull/7162
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7162", "html_url": "https://github.com/huggingface/datasets/pull/7162", "diff_url": "https://github.com/huggingface/datasets/pull/7162.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7162.patch", "merged_at": "2024-09-23T11:30...
true
2,541,971,931
7,161
JSON lines with empty struct raise ArrowTypeError
closed
[]
2024-09-23T08:48:56
2024-09-25T04:43:44
2024-09-23T11:30:07
JSON lines with empty struct raise ArrowTypeError: struct fields don't match or are in the wrong order See example: https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5 > ArrowTypeError: struct fields don't match or are in the wrong order: Input fields: struct<> output fields: struct<pov_c...
albertvillanova
https://github.com/huggingface/datasets/issues/7161
null
false
2,541,877,813
7,160
Support JSON lines with missing struct fields
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7160). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-09-23T08:04:09
2024-09-23T11:09:19
2024-09-23T11:09:17
Support JSON lines with missing struct fields. Fix #7159. The implemented test raised: ``` TypeError: Couldn't cast array of type struct<age: int64> to {'age': Value(dtype='int32', id=None), 'name': Value(dtype='string', id=None)} ```
albertvillanova
https://github.com/huggingface/datasets/pull/7160
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7160", "html_url": "https://github.com/huggingface/datasets/pull/7160", "diff_url": "https://github.com/huggingface/datasets/pull/7160.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7160.patch", "merged_at": "2024-09-23T11:09...
true
2,541,865,613
7,159
JSON lines with missing struct fields raise TypeError: Couldn't cast array
closed
[ "Hello,\r\n\r\nI have still the same issue when loading the dataset with the new version:\r\n[https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5](https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5)\r\n\r\nI have downloaded and unzipped the wikimedia/structured-wik...
2024-09-23T07:57:58
2024-10-21T08:07:07
2024-09-23T11:09:18
JSON lines with missing struct fields raise TypeError: Couldn't cast array of type. See example: https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5 One would expect that the struct missing fields are added with null values.
albertvillanova
https://github.com/huggingface/datasets/issues/7159
null
false
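Issue #7159 and PR #7160 above are the nested variant of the same problem: a struct value may omit fields, and the expected fix is to add the missing fields with null values. A hedged stdlib sketch (field names illustrative):

```python
def fill_struct(value, struct_fields):
    """Return the struct dict with every expected field present, missing ones as None."""
    return {field: value.get(field) for field in struct_fields}

row = {"person": {"age": 31}}
row["person"] = fill_struct(row["person"], struct_fields=["age", "name"])
# The struct now matches the declared schema instead of failing the cast.
```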
2,541,494,765
7,158
google colab ex
closed
[]
2024-09-23T03:29:50
2024-12-20T16:41:07
2024-12-20T16:41:07
null
docfhsp
https://github.com/huggingface/datasets/pull/7158
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7158", "html_url": "https://github.com/huggingface/datasets/pull/7158", "diff_url": "https://github.com/huggingface/datasets/pull/7158.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7158.patch", "merged_at": null }
true
2,540,354,890
7,157
Fix zero proba interleave datasets
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7157). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-09-21T15:19:14
2024-09-24T14:33:54
2024-09-24T14:33:54
fix https://github.com/huggingface/datasets/issues/7147
lhoestq
https://github.com/huggingface/datasets/pull/7157
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7157", "html_url": "https://github.com/huggingface/datasets/pull/7157", "diff_url": "https://github.com/huggingface/datasets/pull/7157.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7157.patch", "merged_at": null }
true
2,539,360,617
7,156
interleave_datasets resets shuffle state
open
[ "It also does preserve `split_by_node`, so in the meantime you should call `shuffle` or `split_by_node` AFTER `interleave_datasets` or `concatenate_datasets`" ]
2024-09-20T17:57:54
2025-03-18T10:56:25
null
### Describe the bug ``` import datasets import torch.utils.data def gen(shards): yield {"shards": shards} def main(): dataset = datasets.IterableDataset.from_generator( gen, gen_kwargs={'shards': list(range(25))} ) dataset = dataset.shuffle(buffer_size=1) dataset...
jonathanasdf
https://github.com/huggingface/datasets/issues/7156
null
false
2,533,641,870
7,155
Dataset viewer not working! Failure due to more than 32 splits.
closed
[ "I have fixed it! But I would appreciate a new feature wheere I could iterate over and see what each file looks like. " ]
2024-09-18T12:43:21
2024-09-18T13:20:03
2024-09-18T13:20:03
Hello guys, I have a dataset and I didn't know I couldn't upload more than 32 splits. Now, my dataset viewer is not working. I don't have the dataset locally on my node anymore and recreating would take a week. And I have to publish the dataset coming Monday. I read about the practice, how I can resolve it and avoi...
sleepingcat4
https://github.com/huggingface/datasets/issues/7155
null
false
2,532,812,323
7,154
Support ndjson data files
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7154). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Thanks for your review, @severo.\r\n\r\nYes, I was aware of this. From internal convers...
2024-09-18T06:10:10
2024-09-19T11:25:17
2024-09-19T11:25:14
Support `ndjson` (Newline Delimited JSON) data files. Fix #7153.
albertvillanova
https://github.com/huggingface/datasets/pull/7154
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7154", "html_url": "https://github.com/huggingface/datasets/pull/7154", "diff_url": "https://github.com/huggingface/datasets/pull/7154.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7154.patch", "merged_at": "2024-09-19T11:25...
true
2,532,788,555
7,153
Support data files with .ndjson extension
closed
[]
2024-09-18T05:54:45
2024-09-19T11:25:15
2024-09-19T11:25:15
### Feature request Support data files with `.ndjson` extension. ### Motivation We already support data files with `.jsonl` extension. ### Your contribution I am opening a PR.
albertvillanova
https://github.com/huggingface/datasets/issues/7153
null
false
2,527,577,048
7,151
Align filename prefix splitting with WebDataset library
closed
[]
2024-09-16T06:07:39
2024-09-16T15:26:36
2024-09-16T15:26:34
Align filename prefix splitting with WebDataset library. This PR uses the same `base_plus_ext` function as the one used by the `webdataset` library. Fix #7150. Related to #7144.
albertvillanova
https://github.com/huggingface/datasets/pull/7151
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7151", "html_url": "https://github.com/huggingface/datasets/pull/7151", "diff_url": "https://github.com/huggingface/datasets/pull/7151.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7151.patch", "merged_at": "2024-09-16T15:26...
true
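PR #7151 above aligns key splitting with the `webdataset` library's `base_plus_ext`, which splits at the first dot after the last slash. The regex below mirrors that function as I recall it (reproduced from memory, so treat it as a sketch rather than the library's exact source):

```python
import re

def base_plus_ext(path):
    """Split a path into (base, extension) at the first dot of the basename."""
    match = re.match(r"^((?:.*/|)[^.]+)[.](.*)$", path)
    if not match:
        return None, None
    return match.group(1), match.group(2)

# For the filename discussed in issue #7150:
base, ext = base_plus_ext("/some/path/22.0/1.1.png")
# base == "/some/path/22.0/1", ext == "1.png"
```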
2,527,571,175
7,150
WebDataset loader splits keys differently than WebDataset library
closed
[]
2024-09-16T06:02:47
2024-09-16T15:26:35
2024-09-16T15:26:35
As reported by @ragavsachdeva (see discussion here: https://github.com/huggingface/datasets/pull/7144#issuecomment-2348307792), our webdataset loader is not aligned with the `webdataset` library when splitting keys from filenames. For example, we get a different key splitting for filename `/some/path/22.0/1.1.png`: ...
albertvillanova
https://github.com/huggingface/datasets/issues/7150
null
false
2,524,497,448
7,149
Datasets Unknown Keyword Argument Error - task_templates
closed
[ "Thanks, for reporting.\r\n\r\nWe have been fixing most Hub datasets to remove the deprecated (and now non-supported) task templates, but we missed the \"facebook/winoground\".\r\n\r\nIt is fixed now: https://huggingface.co/datasets/facebook/winoground/discussions/8\r\n\r\n", "Hello @albertvillanova \r\n\r\nI got...
2024-09-13T10:30:57
2025-03-06T07:11:55
2024-09-13T14:10:48
### Describe the bug Issue ```python from datasets import load_dataset examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>) ``` Gives error ``` TypeError: DatasetInfo.__init__() got an unexpected keyword argument 'task_templates' ``` A simple downgrade to lower `data...
varungupta31
https://github.com/huggingface/datasets/issues/7149
null
false
2,523,833,413
7,148
Bug: Error when downloading mteb/mtop_domain
closed
[ "Could you please try with `force_redownload` instead?\r\nEDIT:\r\n```python\r\ndata = load_dataset(\"mteb/mtop_domain\", \"en\", download_mode=\"force_redownload\")\r\n```", "Seems the error is still there", "I am not able to reproduce the issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\...
2024-09-13T04:09:39
2024-09-14T15:11:35
2024-09-14T15:11:35
### Describe the bug When downloading the dataset "mteb/mtop_domain", ran into the following error: ``` Traceback (most recent call last): File "/share/project/xzy/test/test_download.py", line 3, in <module> data = load_dataset("mteb/mtop_domain", "en", trust_remote_code=True) File "/opt/conda/lib/pytho...
ZiyiXia
https://github.com/huggingface/datasets/issues/7148
null
false
2,523,129,465
7,147
IterableDataset strange deadlock
closed
[ "Yes `interleave_datasets` seems to have an issue with shuffling, could you open a new issue on this ?\r\n\r\nThen regarding the deadlock, it has to do with interleave_dataset with probabilities=[1, 0] with workers that may contain an empty dataset in first position (it can be empty since you distribute 1024 shard ...
2024-09-12T18:59:33
2024-09-23T09:32:27
2024-09-21T17:37:34
### Describe the bug ``` import datasets import torch.utils.data num_shards = 1024 def gen(shards): for shard in shards: if shard < 25: yield {"shard": shard} def main(): dataset = datasets.IterableDataset.from_generator( gen, gen_kwargs={"shards": lis...
jonathanasdf
https://github.com/huggingface/datasets/issues/7147
null
false
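Issue #7147 above (fixed by PR #7157) involves `interleave_datasets` with `probabilities=[1, 0]`: a source with probability 0 should never be drawn, so an empty dataset in that slot must not be able to stall iteration. A stdlib sketch of weighted source selection, not the library's sampler:

```python
import random

def pick_sources(probabilities, k, seed=0):
    """Draw k source indices according to the given sampling probabilities."""
    rng = random.Random(seed)
    indices = range(len(probabilities))
    return rng.choices(indices, weights=probabilities, k=k)

picks = pick_sources([1.0, 0.0], k=100)
# Every pick is source 0: the zero-probability source is never consulted,
# so its contents (even an empty shard) should not matter.
```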
2,519,820,162
7,146
Set dev version
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7146). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-09-11T13:53:27
2024-09-12T04:34:08
2024-09-12T04:34:06
null
albertvillanova
https://github.com/huggingface/datasets/pull/7146
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7146", "html_url": "https://github.com/huggingface/datasets/pull/7146", "diff_url": "https://github.com/huggingface/datasets/pull/7146.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7146.patch", "merged_at": "2024-09-12T04:34...
true
2,519,789,724
7,145
Release: 3.0.0
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7145). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-09-11T13:41:47
2024-09-11T13:48:42
2024-09-11T13:48:41
null
albertvillanova
https://github.com/huggingface/datasets/pull/7145
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7145", "html_url": "https://github.com/huggingface/datasets/pull/7145", "diff_url": "https://github.com/huggingface/datasets/pull/7145.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7145.patch", "merged_at": "2024-09-11T13:48...
true
2,519,393,560
7,144
Fix key error in webdataset
closed
[ "hi ! What version of `datasets` are you using ? Is this issue also happening with `datasets==3.0.0` ?\r\nAsking because we made sure to replicate the official webdataset logic, which is to use the latest dot as separator between the sample base name and the key", "Hi, yes this is still a problem on `datasets==3....
2024-09-11T10:50:17
2025-01-15T10:32:43
2024-09-13T04:31:37
I was running into ``` example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]} KeyError: 'png' ``` The issue is that a filename may have multiple "." e.g. `22.05.png`. Changing `split` to `rsplit` fixes it. Related https://github.com/huggingface/datasets/issues/68...
ragavsachdeva
https://github.com/huggingface/datasets/pull/7144
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7144", "html_url": "https://github.com/huggingface/datasets/pull/7144", "diff_url": "https://github.com/huggingface/datasets/pull/7144.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7144.patch", "merged_at": null }
true
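The fix in PR #7144 above is to split the extension at the last dot rather than the first, so filenames with multiple dots keep their real extension. The difference in plain Python:

```python
filename = "22.05.png"

# str.split breaks at the first dot, mangling the extension:
first = filename.split(".", 1)   # ['22', '05.png'] -> key error on 'png'

# str.rsplit breaks at the last dot, recovering the real extension:
last = filename.rsplit(".", 1)   # ['22.05', 'png']
```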
2,512,327,211
7,143
Modify add_column() to optionally accept a FeatureType as param
closed
[ "Requesting review @lhoestq \r\nI will also update the docs if this looks good.", "Cool ! maybe you can rename the argument `feature` and with type `FeatureType` ? This way it would work the same way as `.cast_column()` ?", "@lhoestq Since there is no way to get a `pyarrow.Schema` from a `FeatureType`, I had to...
2024-09-08T10:56:57
2024-09-17T06:01:23
2024-09-16T15:11:01
Fix #7142. **Before (Add + Cast)**: ``` from datasets import load_dataset, Value ds = load_dataset("rotten_tomatoes", split="test") lst = [i for i in range(len(ds))] ds = ds.add_column("new_col", lst) # Assigns int64 to new_col by default print(ds.features) ds = ds.cast_column("new_col", Value(dtype="u...
varadhbhatnagar
https://github.com/huggingface/datasets/pull/7143
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7143", "html_url": "https://github.com/huggingface/datasets/pull/7143", "diff_url": "https://github.com/huggingface/datasets/pull/7143.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7143.patch", "merged_at": "2024-09-16T15:11...
true
2,512,244,938
7,142
Specifying datatype when adding a column to a dataset.
closed
[ "#self-assign" ]
2024-09-08T07:34:24
2024-09-17T03:46:32
2024-09-17T03:46:32
### Feature request There should be a way to specify the datatype of a column in `datasets.add_column()`. ### Motivation To specify a custom datatype, we have to use `datasets.add_column()` followed by `datasets.cast_column()` which is slow for large datasets. Another workaround is to pass a `numpy.array()` of desi...
varadhbhatnagar
https://github.com/huggingface/datasets/issues/7142
null
false
2,510,797,653
7,141
Older datasets throwing safety errors with 2.21.0
closed
[ "I am also getting this error with this dataset: https://huggingface.co/datasets/google/IFEval", "Me too, didn't have this issue few hours ago.", "same observation. I even downgraded `datasets==2.20.0` and `huggingface_hub==0.23.5` leading me to believe it's an issue on the server.\r\n\r\nany known workarounds?...
2024-09-06T16:26:30
2024-09-06T21:14:14
2024-09-06T19:09:29
### Describe the bug The dataset loading was throwing some safety errors for this popular dataset `wmt14`. [in]: ``` import datasets # train_data = datasets.load_dataset("wmt14", "de-en", split="train") train_data = datasets.load_dataset("wmt14", "de-en", split="train") val_data = datasets.load_dataset(...
alvations
https://github.com/huggingface/datasets/issues/7141
null
false
2,508,078,858
7,139
Use load_dataset to load imagenet-1K But find a empty dataset
open
[ "Imagenet-1k is a gated dataset which means you’ll have to agree to share your contact info to access it. Have you tried this yet? Once you have, you can sign in with your user token (you can find this in your Hugging Face account settings) when prompted by running.\r\n\r\n```\r\nhuggingface-cli login\r\ntrain_set...
2024-09-05T15:12:22
2024-10-09T04:02:41
null
### Describe the bug ```python def get_dataset(data_path, train_folder="train", val_folder="val"): traindir = os.path.join(data_path, train_folder) valdir = os.path.join(data_path, val_folder) def transform_val_examples(examples): transform = Compose([ Resize(256), ...
fscdc
https://github.com/huggingface/datasets/issues/7139
null
false
2,507,738,308
7,138
Cache only changed columns?
open
[ "so I guess a workaround to this is to simply remove all columns except the ones to cache and then add them back with `concatenate_datasets(..., axis=1)`.", "yes this is the right workaround. We're keeping the cache like this to make it easier for people to delete intermediate cache files" ]
2024-09-05T12:56:47
2024-09-20T13:27:20
null
### Feature request Cache only the actual changes to the dataset i.e. changed columns. ### Motivation I realized that caching actually saves the complete dataset again. This is especially problematic for image datasets if one wants to only change another column e.g. some metadata and then has to save 5 TB again. #...
Modexus
https://github.com/huggingface/datasets/issues/7138
null
false
2,506,851,048
7,137
[BUG] dataset_info sequence unexpected behavior in README.md YAML
closed
[ "The non-sequence case works well (`dict[str, str]` instead of `list[dict[str, str]]`), which makes me believe it shall be a bug for `sequence` and my proposed behavior shall be expected.\r\n```\r\ndataset_info:\r\n- config_name: default\r\n features:\r\n - name: answers\r\n dtype:\r\n - name: text\r\n ...
2024-09-05T06:06:06
2025-07-07T09:20:29
2025-07-04T19:50:59
### Describe the bug When working on `dataset_info` yaml, I find my data column with format `list[dict[str, str]]` cannot be coded correctly. My data looks like ``` {"answers":[{"text": "ADDRESS", "label": "abc"}]} ``` My `dataset_info` in README.md is: ``` dataset_info: - config_name: default feature...
ain-soph
https://github.com/huggingface/datasets/issues/7137
null
false
2,506,115,857
7,136
Do not consume unnecessary memory during sharding
open
[]
2024-09-04T19:26:06
2024-09-04T19:28:23
null
When sharding `IterableDataset`s, a temporary list is created that is then indexed. There is no need to create a temporary list of a potentially very large step/world size, with standard `islice` functionality, so we avoid it. ```shell pytest tests/test_distributed.py -k iterable ``` Runs successfully.
janEbert
https://github.com/huggingface/datasets/pull/7136
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7136", "html_url": "https://github.com/huggingface/datasets/pull/7136", "diff_url": "https://github.com/huggingface/datasets/pull/7136.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7136.patch", "merged_at": null }
true
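PR #7136 above replaces a temporary list of up to world-size length with `itertools.islice`. The pattern, sketched independently of the library's code:

```python
from itertools import islice

def shard(iterable, rank, world_size):
    """Lazily yield every world_size-th element starting at rank; no temporary list."""
    return islice(iterable, rank, None, world_size)

# Rank 1 of 4 over a stream of 10 examples:
examples = list(shard(range(10), rank=1, world_size=4))
# -> [1, 5, 9]
```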
2,503,318,328
7,135
Bug: Type Mismatch in Dataset Mapping
open
[ "By the way, following code is working. This show the inconsistentcy.\r\n```python\r\nfrom datasets import Dataset\r\n\r\n# Original data\r\ndata = {\r\n 'text': ['Hello', 'world', 'this', 'is', 'a', 'test'],\r\n 'label': [0, 1, 0, 1, 1, 0]\r\n}\r\n\r\n# Creating a Dataset object\r\ndataset = Dataset.from_dic...
2024-09-03T16:37:01
2024-09-05T14:09:05
null
# Issue: Type Mismatch in Dataset Mapping ## Description There is an issue with the `map` function in the `datasets` library where the mapped output does not reflect the expected type change. After applying a mapping function to convert an integer label to a string, the resulting type remains an integer instead of ...
marko1616
https://github.com/huggingface/datasets/issues/7135
null
false
2,499,484,041
7,134
Attempting to return a rank 3 grayscale image from dataset.map results in extreme slowdown
open
[]
2024-09-01T13:55:41
2024-09-02T10:34:53
null
### Describe the bug Background: Digital images are often represented as a (Height, Width, Channel) tensor. This is the same for huggingface datasets that contain images. These images are loaded in Pillow containers which offer, for example, the `.convert` method. I can convert an image from a (H,W,3) shape to a...
navidmafi
https://github.com/huggingface/datasets/issues/7134
null
false
2,496,474,495
7,133
remove filecheck to enable symlinks
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7133). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "The CI is failing, looks like it breaks imagefolder loading.\r\n\r\nI just checked fssp...
2024-08-30T07:36:56
2024-12-24T14:25:22
2024-12-24T14:25:22
Enables streaming from local symlinks #7083 @lhoestq
fschlatt
https://github.com/huggingface/datasets/pull/7133
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7133", "html_url": "https://github.com/huggingface/datasets/pull/7133", "diff_url": "https://github.com/huggingface/datasets/pull/7133.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7133.patch", "merged_at": "2024-12-24T14:25...
true
2,494,510,464
7,132
Fix data file module inference
open
[ "Hi ! datasets saved using `save_to_disk` should be loaded with `load_from_disk` ;)", "It is convienient to just pass in a path to a local dataset or one from the hub and use the same function to load it. Is it not possible to get this fix merged in to allow this? ", "We can modify `save_to_disk` to write the d...
2024-08-29T13:48:16
2024-09-02T19:52:13
null
I saved a dataset with two splits to disk with `DatasetDict.save_to_disk`. The train is bigger and ended up in 10 shards, whereas the test split only resulted in 1 split. Now when trying to load the dataset, an error is raised that not all splits have the same data format: > ValueError: Couldn't infer the same da...
HennerM
https://github.com/huggingface/datasets/pull/7132
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7132", "html_url": "https://github.com/huggingface/datasets/pull/7132", "diff_url": "https://github.com/huggingface/datasets/pull/7132.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7132.patch", "merged_at": null }
true
2,491,942,650
7,129
Inconsistent output in documentation example: `num_classes` not displayed in `ClassLabel` output
closed
[]
2024-08-28T12:27:48
2024-12-06T11:32:02
2024-12-06T11:32:02
In the documentation for [ClassLabel](https://huggingface.co/docs/datasets/v2.21.0/en/package_reference/main_classes#datasets.ClassLabel), there is an example of usage with the following code: ```` from datasets import Features features = Features({'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'])}) ...
sergiopaniego
https://github.com/huggingface/datasets/issues/7129
null
false
2,490,274,775
7,128
Filter Large Dataset Entry by Entry
open
[ "Hi ! you can do\r\n\r\n```python\r\nfiltered_dataset = dataset.filter(filter_function)\r\n```\r\n\r\non a subset:\r\n\r\n```python\r\nfiltered_subset = dataset.select(range(10_000)).filter(filter_function)\r\n```\r\n", "Jumping on this as it seems relevant - when I use the `filter` method, it often results in an...
2024-08-27T20:31:09
2024-10-07T23:37:44
null
### Feature request I am not sure if this is a new feature, but I wanted to post this problem here, and hear if others have ways of optimizing and speeding up this process. Let's say I have a really large dataset that I cannot load into memory. At this point, I am only aware of `streaming=True` to load the dataset....
QiyaoWei
https://github.com/huggingface/datasets/issues/7128
null
false
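Issue #7128 above asks how to filter a dataset too large for memory entry by entry. A generator pipeline keeps only one record in memory at a time and can stop early; the predicate and record shape here are illustrative, not tied to the `datasets` API:

```python
def lazy_filter(stream, predicate):
    """Yield only matching records; nothing is materialized up front."""
    for record in stream:
        if predicate(record):
            yield record

stream = ({"text": f"doc {i}", "score": i} for i in range(1_000_000))
kept = []
for record in lazy_filter(stream, lambda r: r["score"] % 250_000 == 0):
    kept.append(record["score"])
    if len(kept) == 3:
        break  # stop early without consuming the whole stream
```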
2,486,524,966
7,127
Caching shuffles by np.random.Generator results in unintuitive behavior
open
[ "I first thought this was a mistake of mine, and also posted on stack overflow. https://stackoverflow.com/questions/78913797/iterating-a-huggingface-dataset-from-disk-using-generator-seems-broken-how-to-d \r\n\r\nIt seems to me the issue is the caching step in \r\n\r\nhttps://github.com/huggingface/datasets/blob/be...
2024-08-26T10:29:48
2025-07-28T11:00:00
null
### Describe the bug Create a dataset. Save it to disk. Load from disk. Shuffle, usning a `np.random.Generator`. Iterate. Shuffle again. Iterate. The iterates are different since the supplied np.random.Generator has progressed between the shuffles. Load dataset from disk again. Shuffle and Iterate. See same result ...
el-hult
https://github.com/huggingface/datasets/issues/7127
null
false
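The behavior in issue #7127 above follows from generator statefulness: successive shuffles with the same generator object differ because its state advances between calls, while re-seeding replays the first shuffle. A stdlib analogue (the `datasets` issue uses `np.random.Generator`, but the mechanics are the same):

```python
import random

data = list(range(10))

rng = random.Random(0)
a = data[:]
rng.shuffle(a)          # consumes part of the generator's stream
b = data[:]
rng.shuffle(b)          # continues from the advanced state, so b generally differs from a

rng2 = random.Random(0)
c = data[:]
rng2.shuffle(c)         # a fresh generator with the same seed replays the first shuffle
```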
2,485,939,495
7,126
Disable implicit token in CI
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7126). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-08-26T05:29:46
2024-08-26T06:05:01
2024-08-26T05:59:15
Disable implicit token in CI. This PR allows running CI tests locally without implicitly using the local user HF token. For example, run locally the tests in: - #7124
albertvillanova
https://github.com/huggingface/datasets/pull/7126
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7126", "html_url": "https://github.com/huggingface/datasets/pull/7126", "diff_url": "https://github.com/huggingface/datasets/pull/7126.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7126.patch", "merged_at": "2024-08-26T05:59...
true
2,485,912,246
7,125
Fix wrong SHA in CI tests of HubDatasetModuleFactoryWithParquetExport
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7125). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-08-26T05:09:35
2024-08-26T05:33:15
2024-08-26T05:27:09
Fix wrong SHA in CI tests of HubDatasetModuleFactoryWithParquetExport.
albertvillanova
https://github.com/huggingface/datasets/pull/7125
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7125", "html_url": "https://github.com/huggingface/datasets/pull/7125", "diff_url": "https://github.com/huggingface/datasets/pull/7125.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7125.patch", "merged_at": "2024-08-26T05:27...
true
2,485,890,442
7,124
Test get_dataset_config_info with non-existing/gated/private dataset
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7124). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-08-26T04:53:59
2024-08-26T06:15:33
2024-08-26T06:09:42
Test get_dataset_config_info with non-existing/gated/private dataset. Related to: - #7109 See also: - https://github.com/huggingface/dataset-viewer/pull/3037: https://github.com/huggingface/dataset-viewer/pull/3037/commits/bb1a7e00c53c242088597cab6572e4fd57797ecb
albertvillanova
https://github.com/huggingface/datasets/pull/7124
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7124", "html_url": "https://github.com/huggingface/datasets/pull/7124", "diff_url": "https://github.com/huggingface/datasets/pull/7124.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7124.patch", "merged_at": "2024-08-26T06:09...
true
2,484,003,937
7,123
Make dataset viewer more flexible in displaying metadata alongside images
open
[ "Note that you can already have one directory per subset just for the metadata, e.g.\r\n\r\n```\r\nconfigs:\r\n - config_name: subset0\r\n data_files:\r\n - subset0/metadata.csv\r\n - images/*.jpg\r\n - config_name: subset1\r\n data_files:\r\n - subset1/metadata.csv\r\n - images/*.jpg\r\...
2024-08-23T22:56:01
2024-10-17T09:13:47
null
### Feature request To display images with their associated metadata in the dataset viewer, a `metadata.csv` file is required. In the case of a dataset with multiple subsets, this would require the CSVs to be contained in the same folder as the images since they all need to be named `metadata.csv`. The request is th...
egrace479
https://github.com/huggingface/datasets/issues/7123
null
false
2,482,491,258
7,122
[interleave_dataset] sample batches from a single source at a time
open
[]
2024-08-23T07:21:15
2024-08-23T07:21:15
null
### Feature request interleave_dataset and [RandomlyCyclingMultiSourcesExamplesIterable](https://github.com/huggingface/datasets/blob/3813ce846e52824b38e53895810682f0a496a2e3/src/datasets/iterable_dataset.py#L816) enable us to sample data examples from different sources. But can we also sample batches in a similar man...
memray
https://github.com/huggingface/datasets/issues/7122
null
false
2,480,978,483
7,121
Fix typed examples iterable state dict
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7121). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-08-22T14:45:03
2024-08-22T14:54:56
2024-08-22T14:49:06
fix https://github.com/huggingface/datasets/issues/7085 as noted by @VeryLazyBoy and reported by @AjayP13
lhoestq
https://github.com/huggingface/datasets/pull/7121
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7121", "html_url": "https://github.com/huggingface/datasets/pull/7121", "diff_url": "https://github.com/huggingface/datasets/pull/7121.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7121.patch", "merged_at": "2024-08-22T14:49...
true
2,480,674,237
7,120
don't mention the script if trust_remote_code=False
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7120). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Note that in this case, we could even expect this kind of message:\r\n\r\n```\r\nDataFi...
2024-08-22T12:32:32
2024-08-22T14:39:52
2024-08-22T14:33:52
See https://huggingface.co/datasets/Omega02gdfdd/bioclip-demo-zero-shot-mistakes for example. The error is: ``` FileNotFoundError: Couldn't find a dataset script at /src/services/worker/Omega02gdfdd/bioclip-demo-zero-shot-mistakes/bioclip-demo-zero-shot-mistakes.py or any data file in the same directory. Couldn't f...
severo
https://github.com/huggingface/datasets/pull/7120
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7120", "html_url": "https://github.com/huggingface/datasets/pull/7120", "diff_url": "https://github.com/huggingface/datasets/pull/7120.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7120.patch", "merged_at": "2024-08-22T14:33...
true
2,477,766,493
7,119
Install transformers with numpy-2 CI
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7119). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-08-21T11:14:59
2024-08-21T11:42:35
2024-08-21T11:36:50
Install transformers with numpy-2 CI. Note that transformers no longer pins numpy < 2 since transformers-4.43.0: - https://github.com/huggingface/transformers/pull/32018 - https://github.com/huggingface/transformers/releases/tag/v4.43.0
albertvillanova
https://github.com/huggingface/datasets/pull/7119
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7119", "html_url": "https://github.com/huggingface/datasets/pull/7119", "diff_url": "https://github.com/huggingface/datasets/pull/7119.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7119.patch", "merged_at": "2024-08-21T11:36...
true
2,477,676,893
7,118
Allow numpy-2.1 and test it without audio extra
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7118). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-08-21T10:29:35
2024-08-21T11:05:03
2024-08-21T10:58:15
Allow numpy-2.1 and test it without audio extra. This PR reverts: - #7114 Note that audio extra tests can be included again with numpy-2.1 once next numba-0.61.0 version is released.
albertvillanova
https://github.com/huggingface/datasets/pull/7118
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7118", "html_url": "https://github.com/huggingface/datasets/pull/7118", "diff_url": "https://github.com/huggingface/datasets/pull/7118.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7118.patch", "merged_at": "2024-08-21T10:58...
true
2,476,555,659
7,117
Audio dataset load everything in RAM and is very slow
open
[ "Hi ! I think the issue comes from the fact that you return `row` entirely, and therefore the dataset has to re-encode the audio data in `row`.\r\n\r\nCan you try this instead ?\r\n\r\n```python\r\n# map the dataset\r\ndef transcribe_audio(row):\r\n audio = row[\"audio\"] # get the audio but do nothing with it\...
2024-08-20T21:18:12
2024-08-26T13:11:55
null
Hello, I'm working with an audio dataset. I want to transcribe the audio that the dataset contain, and for that I use whisper. My issue is that the dataset load everything in the RAM when I map the dataset, obviously, when RAM usage is too high, the program crashes. To fix this issue, I'm using writer_batch_size tha...
Jourdelune
https://github.com/huggingface/datasets/issues/7117
null
false
2,475,522,721
7,116
datasets cannot handle nested json if features is given.
closed
[ "Hi ! `Sequence` has a weird behavior for dictionaries (from tensorflow-datasets), use a regular list instead:\r\n\r\n```python\r\nds = datasets.load_dataset('json', data_files=\"./temp.json\", features=datasets.Features({\r\n 'ref1': datasets.Value('string'),\r\n 'ref2': datasets.Value('string'),\r\n 'cut...
2024-08-20T12:27:49
2024-09-03T10:18:23
2024-09-03T10:18:07
### Describe the bug I have a json named temp.json. ```json {"ref1": "ABC", "ref2": "DEF", "cuts":[{"cut1": 3, "cut2": 5}]} ``` I want to load it. ```python ds = datasets.load_dataset('json', data_files="./temp.json", features=datasets.Features({ 'ref1': datasets.Value('string'), 'ref2': datasets.Value...
ljw20180420
https://github.com/huggingface/datasets/issues/7116
null
false
2,475,363,142
7,115
module 'pyarrow.lib' has no attribute 'ListViewType'
closed
[ "https://github.com/neurafusionai/Hugging_Face/blob/main/meta_opt_350m_customer_support_lora_v1.ipynb\r\n\r\ncouldnt train because of GPU\r\nI didnt pip install datasets -U\r\nbut looks like restarting worked" ]
2024-08-20T11:05:44
2024-09-10T06:51:08
2024-09-10T06:51:08
### Describe the bug Code: `!pipuninstall -y pyarrow !pip install --no-cache-dir pyarrow !pip uninstall -y pyarrow !pip install pyarrow --no-cache-dir !pip install --upgrade datasets transformers pyarrow !pip install pyarrow.parquet ! pip install pyarrow-core libparquet !pip install pyarrow --no-cache-di...
neurafusionai
https://github.com/huggingface/datasets/issues/7115
null
false
2,475,062,252
7,114
Temporarily pin numpy<2.1 to fix CI
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7114). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-08-20T08:42:57
2024-08-20T09:09:27
2024-08-20T09:02:35
Temporarily pin numpy<2.1 to fix CI. Fix #7111.
albertvillanova
https://github.com/huggingface/datasets/pull/7114
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7114", "html_url": "https://github.com/huggingface/datasets/pull/7114", "diff_url": "https://github.com/huggingface/datasets/pull/7114.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7114.patch", "merged_at": "2024-08-20T09:02...
true
2,475,029,640
7,113
Stream dataset does not iterate if the batch size is larger than the dataset size (related to drop_last_batch)
closed
[ "That's expected behavior, it's also the same in `torch`:\r\n\r\n```python\r\n>>> list(DataLoader(list(range(5)), batch_size=10, drop_last=True))\r\n[]\r\n```" ]
2024-08-20T08:26:40
2024-08-26T04:24:11
2024-08-26T04:24:10
### Describe the bug Hi there, I use streaming and interleaving to combine multiple datasets saved in jsonl files. The size of dataset can vary (from 100ish to 100k-ish). I use dataset.map() and a big batch size to reduce the IO cost. It was working fine with datasets-2.16.1 but this problem shows up after I upgr...
memray
https://github.com/huggingface/datasets/issues/7113
null
false
2,475,004,644
7,112
cudf-cu12 24.4.1, ibis-framework 8.0.0 requires pyarrow<15.0.0a0,>=14.0.1,pyarrow<16,>=2 and datasets 2.21.0 requires pyarrow>=15.0.0
open
[ "@sayakpaul please advice ", "Hits the same dependency conflict" ]
2024-08-20T08:13:55
2024-09-20T15:30:03
null
### Describe the bug !pip install accelerate>=0.16.0 torchvision transformers>=4.25.1 datasets>=2.19.1 ftfy tensorboard Jinja2 peft==0.7.0 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. c...
SoumyaMB10
https://github.com/huggingface/datasets/issues/7112
null
false
2,474,915,845
7,111
CI is broken for numpy-2: Failed to fetch wheel: llvmlite==0.34.0
closed
[ "Note that the CI before was using:\r\n- llvmlite: 0.43.0\r\n- numba: 0.60.0\r\n\r\nNow it tries to use:\r\n- llvmlite: 0.34.0\r\n- numba: 0.51.2", "The issue is because numba-0.60.0 pins numpy<2.1 and `uv` tries to install latest numpy-2.1.0 with an old numba-0.51.0 version (and llvmlite-0.34.0). See discussion ...
2024-08-20T07:27:28
2024-08-21T05:05:36
2024-08-20T09:02:36
CI is broken with error `Failed to fetch wheel: llvmlite==0.34.0`: https://github.com/huggingface/datasets/actions/runs/10466825281/job/28984414269 ``` Run uv pip install --system "datasets[tests_numpy2] @ ." Resolved 150 packages in 4.42s error: Failed to prepare distributions Caused by: Failed to fetch wheel: ...
albertvillanova
https://github.com/huggingface/datasets/issues/7111
null
false
2,474,747,695
7,110
Fix ConnectionError for gated datasets and unauthenticated users
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7110). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Note that the CI error is unrelated to this PR and should be addressed in another PR. S...
2024-08-20T05:26:54
2024-08-20T15:11:35
2024-08-20T09:14:35
Fix `ConnectionError` for gated datasets and unauthenticated users. See: - https://github.com/huggingface/dataset-viewer/issues/3025 Note that a recent change in the Hub returns dataset info for gated datasets and unauthenticated users, instead of raising a `GatedRepoError` as before. See: - https://github.com/hug...
albertvillanova
https://github.com/huggingface/datasets/pull/7110
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7110", "html_url": "https://github.com/huggingface/datasets/pull/7110", "diff_url": "https://github.com/huggingface/datasets/pull/7110.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7110.patch", "merged_at": "2024-08-20T09:14...
true
2,473,367,848
7,109
ConnectionError for gated datasets and unauthenticated users
closed
[]
2024-08-19T13:27:45
2024-08-20T09:14:36
2024-08-20T09:14:35
Since the Hub returns dataset info for gated datasets and unauthenticated users, there is dead code: https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/load.py#L1846-L1852 We should remove the dead code and properly handle this case: currently we are raising a `Connect...
albertvillanova
https://github.com/huggingface/datasets/issues/7109
null
false
2,470,665,327
7,108
website broken: Create a new dataset repository, doesn't create a new repo in Firefox
closed
[ "I don't reproduce, I was able to create a new repo: https://huggingface.co/datasets/severo/reproduce-datasets-issues-7108. Can you confirm it's still broken?", "I have just tried again.\r\n\r\nFirefox: The `Create dataset` doesn't work. It has worked in the past. It's my preferred browser.\r\n\r\nChrome: The `Cr...
2024-08-16T17:23:00
2024-08-19T13:21:12
2024-08-19T06:52:48
### Describe the bug This issue is also reported here: https://discuss.huggingface.co/t/create-a-new-dataset-repository-broken-page/102644 This page is broken. https://huggingface.co/new-dataset I fill in the form with my text, and click `Create Dataset`. ![Screenshot 2024-08-16 at 15 55 37](https://github....
neoneye
https://github.com/huggingface/datasets/issues/7108
null
false
2,470,444,732
7,107
load_dataset broken in 2.21.0
closed
[ "There seems to be a PR related to the load_dataset path that went into 2.21.0 -- https://github.com/huggingface/datasets/pull/6862/files\r\n\r\nTaking a look at it now", "+1\r\n\r\nDowngrading to 2.20.0 fixed my issue, hopefully helpful for others.", "I tried adding a simple test to `test_load.py` with the alp...
2024-08-16T14:59:51
2024-08-18T09:28:43
2024-08-18T09:27:12
### Describe the bug `eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)` used to work till 2.20.0 but doesn't work in 2.21.0 In 2.20.0: ![Screenshot 2024-08-16 at 3 57 10 PM](https://github.com/user-attachments/assets/0516489b-8187-486d-bee8-88af3381de...
anjor
https://github.com/huggingface/datasets/issues/7107
null
false
2,469,854,262
7,106
Rename LargeList.dtype to LargeList.feature
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7106). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-08-16T09:12:04
2024-08-26T04:31:59
2024-08-26T04:26:02
Rename `LargeList.dtype` to `LargeList.feature`. Note that `dtype` is usually used for NumPy data types ("int64", "float32",...): see `Value.dtype`. However, `LargeList` attribute (like `Sequence.feature`) expects a `FeatureType` instead. With this renaming: - we avoid confusion about the expected type and -...
albertvillanova
https://github.com/huggingface/datasets/pull/7106
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7106", "html_url": "https://github.com/huggingface/datasets/pull/7106", "diff_url": "https://github.com/huggingface/datasets/pull/7106.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7106.patch", "merged_at": "2024-08-26T04:26...
true
2,468,207,039
7,105
Use `huggingface_hub` cache
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7105). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Nice\r\n\r\n<img width=\"141\" alt=\"Capture d’écran 2024-08-19 à 15 25 00\" src=\"ht...
2024-08-15T14:45:22
2024-09-12T04:36:08
2024-08-21T15:47:16
- use `hf_hub_download()` from `huggingface_hub` for HF files - `datasets` cache_dir is still used for: - caching datasets as Arrow files (that back `Dataset` objects) - extracted archives, uncompressed files - files downloaded via http (datasets with scripts) - I removed code that were made for http files (...
lhoestq
https://github.com/huggingface/datasets/pull/7105
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7105", "html_url": "https://github.com/huggingface/datasets/pull/7105", "diff_url": "https://github.com/huggingface/datasets/pull/7105.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7105.patch", "merged_at": "2024-08-21T15:47...
true
2,467,788,212
7,104
remove more script docs
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7104). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-08-15T10:13:26
2024-08-15T10:24:13
2024-08-15T10:18:25
null
lhoestq
https://github.com/huggingface/datasets/pull/7104
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7104", "html_url": "https://github.com/huggingface/datasets/pull/7104", "diff_url": "https://github.com/huggingface/datasets/pull/7104.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7104.patch", "merged_at": "2024-08-15T10:18...
true
2,467,664,581
7,103
Fix args of feature docstrings
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7103). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-08-15T08:46:08
2024-08-16T09:18:29
2024-08-15T10:33:30
Fix Args section of feature docstrings. Currently, some args do not appear in the docs because they are not properly parsed due to the lack of their type (between parentheses).
albertvillanova
https://github.com/huggingface/datasets/pull/7103
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7103", "html_url": "https://github.com/huggingface/datasets/pull/7103", "diff_url": "https://github.com/huggingface/datasets/pull/7103.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7103.patch", "merged_at": "2024-08-15T10:33...
true
2,466,893,106
7,102
Slow iteration speeds when using IterableDataset.shuffle with load_dataset(data_files=..., streaming=True)
open
[ "Hi @lajd , I was skeptical about how we are saving the shards each as their own dataset (arrow file) in the script above, and so I updated the script to try out saving the shards in a few different file formats. From the experiments I ran, I saw binary format show significantly the best performance, with arrow a...
2024-08-14T21:44:44
2024-08-15T16:17:31
null
### Describe the bug When I load a dataset from a number of arrow files, as in: ``` random_dataset = load_dataset( "arrow", data_files={split: shard_filepaths}, streaming=True, split=split, ) ``` I'm able to get fast iteration speeds when iterating over the dataset without shuffling. ...
lajd
https://github.com/huggingface/datasets/issues/7102
null
false
2,466,510,783
7,101
`load_dataset` from Hub with `name` to specify `config` using incorrect builder type when multiple data formats are present
open
[ "Having looked into this further it seems the core of the issue is with two different formats in the same repo.\r\n\r\nWhen the `parquet` config is first, the `WebDataset`s are loaded as `parquet`, if the `WebDataset` configs are first, the `parquet` is loaded as `WebDataset`.\r\n\r\nA workaround in my case would b...
2024-08-14T18:12:25
2024-08-18T10:33:38
null
Following [documentation](https://huggingface.co/docs/datasets/repository_structure#define-your-splits-and-subsets-in-yaml) I had defined different configs for [`Dataception`](https://huggingface.co/datasets/bigdata-pw/Dataception), a dataset of datasets: ```yaml configs: - config_name: dataception data_files: ...
hlky
https://github.com/huggingface/datasets/issues/7101
null
false
2,465,529,414
7,100
IterableDataset: cannot resolve features from list of numpy arrays
open
[ "Assign this issue to me under Hacktoberfest with hacktoberfest label inserted on the issue" ]
2024-08-14T11:01:51
2024-10-03T05:47:23
null
### Describe the bug when resolve features of `IterableDataset`, got `pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values` error. ``` Traceback (most recent call last): File "test.py", line 6 iter_ds = iter_ds._resolve_features() File "lib/python3.10/site-packages/datasets/iterable_dat...
VeryLazyBoy
https://github.com/huggingface/datasets/issues/7100
null
false
2,465,221,827
7,099
Set dev version
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7099). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-08-14T08:31:17
2024-08-14T08:45:17
2024-08-14T08:39:25
null
albertvillanova
https://github.com/huggingface/datasets/pull/7099
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7099", "html_url": "https://github.com/huggingface/datasets/pull/7099", "diff_url": "https://github.com/huggingface/datasets/pull/7099.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7099.patch", "merged_at": "2024-08-14T08:39...
true
2,465,016,562
7,098
Release: 2.21.0
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7098). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-08-14T06:35:13
2024-08-14T06:41:07
2024-08-14T06:41:06
null
albertvillanova
https://github.com/huggingface/datasets/pull/7098
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7098", "html_url": "https://github.com/huggingface/datasets/pull/7098", "diff_url": "https://github.com/huggingface/datasets/pull/7098.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7098.patch", "merged_at": "2024-08-14T06:41...
true
2,458,455,489
7,097
Some of DownloadConfig's properties are always being overridden in load.py
open
[]
2024-08-09T18:26:37
2024-08-09T18:26:37
null
### Describe the bug The `extract_compressed_file` and `force_extract` properties of DownloadConfig are always being set to True in the function `dataset_module_factory` in the `load.py` file. This behavior is very annoying because data extracted will just be ignored the next time the dataset is loaded. See this im...
ductai199x
https://github.com/huggingface/datasets/issues/7097
null
false
2,456,929,173
7,096
Automatically create `cache_dir` from `cache_file_name`
closed
[ "Hi @albertvillanova, is this PR looking okay to you? Anything else you'd like to see?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7096). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",...
2024-08-09T01:34:06
2024-08-15T17:25:26
2024-08-15T10:13:22
You get a pretty unhelpful error message when specifying a `cache_file_name` in a directory that doesn't exist, e.g. `cache_file_name="./cache/data.map"` ```python import datasets cache_file_name="./cache/train.map" dataset = datasets.load_dataset("ylecun/mnist") dataset["train"].map(lambda x: x, cache_file_na...
ringohoffman
https://github.com/huggingface/datasets/pull/7096
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7096", "html_url": "https://github.com/huggingface/datasets/pull/7096", "diff_url": "https://github.com/huggingface/datasets/pull/7096.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7096.patch", "merged_at": "2024-08-15T10:13...
true
2,454,418,130
7,094
Add Arabic Docs to Datasets
open
[]
2024-08-07T21:53:06
2024-08-07T21:53:06
null
Translate Docs into Arabic issue-number : #7093 [Arabic Docs](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx) [English Docs](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/en/index.mdx) @stevhliu
AhmedAlmaghz
https://github.com/huggingface/datasets/pull/7094
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7094", "html_url": "https://github.com/huggingface/datasets/pull/7094", "diff_url": "https://github.com/huggingface/datasets/pull/7094.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7094.patch", "merged_at": null }
true
2,454,413,074
7,093
Add Arabic Docs to datasets
open
[]
2024-08-07T21:48:05
2024-08-07T21:48:05
null
### Feature request Add Arabic Docs to datasets [Datasets Arabic](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx) ### Motivation @AhmedAlmaghz https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx ### Your contribution @AhmedAlmaghz https://github.com/AhmedAlma...
AhmedAlmaghz
https://github.com/huggingface/datasets/issues/7093
null
false
2,451,393,658
7,092
load_dataset with multiple jsonlines files interprets datastructure too early
open
[ "I’ll take a look", "Possible definitions of done for this issue:\r\n\r\n1. A fix so you can load your dataset specifically\r\n2. A general fix for datasets similar to this in the `datasets` library\r\n\r\nOption 1 is trivial. I think option 2 requires significant changes to the library.\r\n\r\nSince you outlined...
2024-08-06T17:42:55
2024-08-08T16:35:01
null
### Describe the bug likely related to #6460 using `datasets.load_dataset("json", data_dir= ... )` with multiple `.jsonl` files will error if one of the files (maybe the first file?) contains a full column of empty data. ### Steps to reproduce the bug real world example: data is available in this [PR-bra...
Vipitis
https://github.com/huggingface/datasets/issues/7092
null
false
2,449,699,490
7,090
The test test_move_script_doesnt_change_hash fails because it runs the 'python' command while the python executable has a different name
open
[]
2024-08-06T00:35:05
2024-08-06T00:35:05
null
### Describe the bug Tests should use the same pythin path as they are launched with, which in the case of FreeBSD is /usr/local/bin/python3.11 Failure: ``` if err_filename is not None: > raise child_exception_type(errno_num, err_msg, err_filename) E FileNotFo...
yurivict
https://github.com/huggingface/datasets/issues/7090
null
false
2,449,479,500
7,089
Missing pyspark dependency causes the testsuite to error out, instead of a few tests to be skipped
open
[]
2024-08-05T21:05:11
2024-08-05T21:05:11
null
### Describe the bug see the subject ### Steps to reproduce the bug regular tests ### Expected behavior n/a ### Environment info version 2.20.0
yurivict
https://github.com/huggingface/datasets/issues/7089
null
false
2,447,383,940
7,088
Disable warning when using with_format format on tensors
open
[]
2024-08-05T00:45:50
2024-08-05T00:45:50
null
### Feature request If we write this code: ```python """Get data and define datasets.""" from enum import StrEnum from datasets import load_dataset from torch.utils.data import DataLoader from torchvision import transforms class Split(StrEnum): """Describes what type of split to use in the dataloa...
Haislich
https://github.com/huggingface/datasets/issues/7088
null
false
2,447,158,643
7,087
Unable to create dataset card for Lushootseed language
closed
[ "Thanks for reporting.\r\n\r\nIt is weird, because the language entry is in the list. See: https://github.com/huggingface/huggingface.js/blob/98e32f0ed4ee057a596f66a1dec738e5db9643d5/packages/languages/src/languages_iso_639_3.ts#L15186-L15189\r\n\r\nI have reported the issue:\r\n- https://github.com/huggingface/hug...
2024-08-04T14:27:04
2024-08-06T06:59:23
2024-08-06T06:59:22
### Feature request While I was creating the dataset which contained all documents from the Lushootseed Wikipedia, the dataset card asked me to enter which language the dataset was in. Since Lushootseed is a critically endangered language, it was not available as one of the options. Is it possible to allow entering la...
vaishnavsudarshan
https://github.com/huggingface/datasets/issues/7087
null
false
2,445,516,829
7,086
load_dataset ignores cached datasets and tries to hit HF Hub, resulting in API rate limit errors
open
[ "I'm having the same issue - running into rate limits when doing hyperparameter tuning even though the dataset is supposed to be cached. I feel like this behaviour should at the very least be documented, but honestly you should just not be running into rate limits in the first place when the dataset is cached. It e...
2024-08-02T18:12:23
2025-06-16T18:43:29
null
### Describe the bug I have been running lm-eval-harness a lot which has results in an API rate limit. This seems strange, since all of the data should be cached locally. I have in fact verified this. ### Steps to reproduce the bug 1. Be Me 2. Run `load_dataset("TAUR-Lab/MuSR")` 3. Hit rate limit error 4. Dataset...
tginart
https://github.com/huggingface/datasets/issues/7086
null
false
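The caching issue in #7086 can often be worked around by forcing offline mode so `load_dataset` never contacts the Hub. A minimal sketch using the documented `HF_HUB_OFFLINE` environment variable; the `is_offline` helper below is only an illustrative approximation of the truthiness check `huggingface_hub` applies, not its actual code:

```python
import os

# Force huggingface_hub (and therefore datasets) to use only the local cache.
# In a real session this must be set before importing datasets/huggingface_hub.
os.environ["HF_HUB_OFFLINE"] = "1"

def is_offline() -> bool:
    """Approximate the truthiness check applied to the variable."""
    return os.environ.get("HF_HUB_OFFLINE", "0").lower() in ("1", "true", "yes")

print(is_offline())
```

With the variable set, cached datasets load without any network round-trips, so no rate limit can be hit.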
2,440,008,618
7,085
[Regression] IterableDataset is broken on 2.20.0
closed
[ "@lhoestq I detected this regression over on [DataDreamer](https://github.com/datadreamer-dev/DataDreamer)'s test suite. I put in these [monkey patches](https://github.com/datadreamer-dev/DataDreamer/blob/4cbaf9f39cf7bedde72bbaa68346e169788fbecb/src/_patches/datasets_reset_state_hack.py) in case that fixed it our t...
2024-07-31T13:01:59
2024-08-22T14:49:37
2024-08-22T14:49:07
### Describe the bug In the latest version of datasets there is a major regression, after creating an `IterableDataset` from a generator and applying a few operations (`map`, `select`), you can no longer iterate through the dataset multiple times. The issue seems to stem from the recent addition of "resumable Itera...
AjayP13
https://github.com/huggingface/datasets/issues/7085
null
false
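The regression in #7085 boils down to whether an iterable dataset holds a generator *object* (exhausted after one pass) or a generator *factory* (called anew on each `__iter__`). A pure-Python sketch of the re-iterable pattern, independent of the `datasets` API:

```python
from typing import Callable, Iterable, Iterator

class ReiterableDataset:
    """Wraps a generator *factory* so every iteration starts fresh."""

    def __init__(self, make_gen: Callable[[], Iterable]):
        self.make_gen = make_gen

    def __iter__(self) -> Iterator:
        # Calling the factory again yields a brand-new generator each time,
        # so the dataset can be iterated multiple times.
        return iter(self.make_gen())

ds = ReiterableDataset(lambda: (i * i for i in range(3)))
first = list(ds)
second = list(ds)
print(first, second)  # both passes see the same data
```

If the wrapper instead stored `make_gen()` once at construction time, the second `list(ds)` would come back empty, which is the shape of the reported bug.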
2,439,519,534
7,084
More easily support streaming local files
open
[]
2024-07-31T09:03:15
2024-07-31T09:05:58
null
### Feature request Simplify downloading and streaming datasets locally. Specifically, perhaps add an option to `load_dataset(..., streaming="download_first")` or add better support for streaming symlinked or arrow files. ### Motivation I have downloaded FineWeb-edu locally and currently trying to stream the d...
fschlatt
https://github.com/huggingface/datasets/issues/7084
null
false
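Streaming a local JSON Lines file, as requested in #7084, is essentially lazy line-by-line reading. A stdlib-only sketch of the idea (the demo file and its contents are made up for illustration):

```python
import json
import tempfile
from pathlib import Path

def stream_jsonl(path):
    """Yield one record at a time; the file is never fully materialized."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

# Tiny demo file (hypothetical data).
demo = Path(tempfile.mkdtemp()) / "demo.jsonl"
demo.write_text('{"text": "a"}\n{"text": "b"}\n', encoding="utf-8")

records = list(stream_jsonl(demo))
print(records)
```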
2,439,518,466
7,083
fix streaming from arrow files
closed
[]
2024-07-31T09:02:42
2024-08-30T15:17:03
2024-08-30T15:17:03
null
fschlatt
https://github.com/huggingface/datasets/pull/7083
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7083", "html_url": "https://github.com/huggingface/datasets/pull/7083", "diff_url": "https://github.com/huggingface/datasets/pull/7083.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7083.patch", "merged_at": "2024-08-30T15:17...
true
2,437,354,975
7,082
Support HTTP authentication in non-streaming mode
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7082). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-07-30T09:25:49
2024-08-08T08:29:55
2024-08-08T08:24:06
Support HTTP authentication in non-streaming mode, by support passing HTTP storage_options in non-streaming mode. - Note that currently, HTTP authentication is supported only in streaming mode. For example, this is necessary if a remote HTTP host requires authentication to download the data.
albertvillanova
https://github.com/huggingface/datasets/pull/7082
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7082", "html_url": "https://github.com/huggingface/datasets/pull/7082", "diff_url": "https://github.com/huggingface/datasets/pull/7082.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7082.patch", "merged_at": "2024-08-08T08:24...
true
2,437,059,657
7,081
Set load_from_disk path type as PathLike
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7081). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-07-30T07:00:38
2024-07-30T08:30:37
2024-07-30T08:21:50
Set `load_from_disk` path type as `PathLike`. This way it is aligned with `save_to_disk`.
albertvillanova
https://github.com/huggingface/datasets/pull/7081
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7081", "html_url": "https://github.com/huggingface/datasets/pull/7081", "diff_url": "https://github.com/huggingface/datasets/pull/7081.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7081.patch", "merged_at": "2024-07-30T08:21...
true
2,434,275,664
7,080
Generating train split takes a long time
open
[ "@alexanderswerdlow \r\nWhen no specific split is mentioned, the load_dataset library will load all available splits of the dataset. For example, if a dataset has \"train\" and \"test\" splits, the load_dataset function will load both into the DatasetDict object.\r\n\r\n![image](https://github.com/user-attachments/...
2024-07-29T01:42:43
2024-10-02T15:31:22
null
### Describe the bug Loading a simple webdataset takes ~45 minutes. ### Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M") ``` ### Expected behavior The dataset should load immediately as it does when loaded through a normal indexed WebD...
alexanderswerdlow
https://github.com/huggingface/datasets/issues/7080
null
false
2,433,363,298
7,079
HfHubHTTPError: 500 Server Error: Internal Server Error for url:
closed
[ "same issue here. @albertvillanova @lhoestq ", "Also impacted by this issue in many of my datasets (though not all) - in my case, this also seems to affect datasets that have been updated recently. Git cloning and the web interface still work:\r\n- https://huggingface.co/api/datasets/acmc/cheat_reduced\r\n- https...
2024-07-27T08:21:03
2024-09-20T13:26:25
2024-07-27T19:52:30
### Describe the bug newly uploaded datasets, since yesterday, yields an error. old datasets, works fine. Seems like the datasets api server returns a 500 I'm getting the same error, when I invoke `load_dataset` with my dataset. Long discussion about it here, but I'm not sure anyone from huggingface have s...
neoneye
https://github.com/huggingface/datasets/issues/7079
null
false
2,433,270,271
7,078
Fix CI test_convert_to_parquet
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7078). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-07-27T05:32:40
2024-07-27T05:50:57
2024-07-27T05:44:32
Fix `test_convert_to_parquet` by patching `HfApi.preupload_lfs_files` and revert temporary fix: - #7074
albertvillanova
https://github.com/huggingface/datasets/pull/7078
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7078", "html_url": "https://github.com/huggingface/datasets/pull/7078", "diff_url": "https://github.com/huggingface/datasets/pull/7078.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7078.patch", "merged_at": "2024-07-27T05:44...
true
2,432,345,489
7,077
column_names ignored by load_dataset() when loading CSV file
open
[ "I confirm that `column_names` values are not copied to `names` variable because in this case `CsvConfig.__post_init__` is not called: `CsvConfig` is instantiated with default values and afterwards the `config_kwargs` are used to overwrite its attributes.\r\n\r\n@luismsgomes in the meantime, you can avoid the bug i...
2024-07-26T14:18:04
2024-07-30T07:52:26
null
### Describe the bug load_dataset() ignores the column_names kwarg when loading a CSV file. Instead, it uses whatever values are on the first line of the file. ### Steps to reproduce the bug Call `load_dataset` to load data from a CSV file and specify `column_names` kwarg. ### Expected behavior The resulting da...
luismsgomes
https://github.com/huggingface/datasets/issues/7077
null
false
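Until the `column_names` bug from #7077 is fixed, the behavior the reporter expected can be approximated with the standard library: parse with explicit field names and drop the file's own header row yourself. The column and field names below are made up for illustration:

```python
import csv
import io

raw = "colA,colB\n1,x\n2,y\n"  # first line is a header we want to override

reader = csv.DictReader(
    io.StringIO(raw),
    fieldnames=["id", "label"],  # our names, not the file's header
)
rows = list(reader)
rows = rows[1:]  # drop the original header row, which was parsed as data
print(rows)
```

Note that when `fieldnames` is passed, `csv.DictReader` treats every line as data, which is why the original header must be sliced off explicitly.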
2,432,275,393
7,076
πŸ§ͺ Do not mock create_commit
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7076). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-07-26T13:44:42
2024-07-27T05:48:17
2024-07-27T05:48:17
null
coyotte508
https://github.com/huggingface/datasets/pull/7076
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7076", "html_url": "https://github.com/huggingface/datasets/pull/7076", "diff_url": "https://github.com/huggingface/datasets/pull/7076.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7076.patch", "merged_at": null }
true
2,432,027,412
7,075
Update required soxr version from pre-release to release
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7075). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-07-26T11:24:35
2024-07-26T11:46:52
2024-07-26T11:40:49
Update required `soxr` version from pre-release to release 0.4.0: https://github.com/dofuuz/python-soxr/releases/tag/v0.4.0
albertvillanova
https://github.com/huggingface/datasets/pull/7075
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7075", "html_url": "https://github.com/huggingface/datasets/pull/7075", "diff_url": "https://github.com/huggingface/datasets/pull/7075.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7075.patch", "merged_at": "2024-07-26T11:40...
true
2,431,772,703
7,074
Fix CI by temporarily marking test_convert_to_parquet as expected to fail
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7074). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-07-26T09:03:33
2024-07-26T09:23:33
2024-07-26T09:16:12
As a hotfix for CI, temporarily mark test_convert_to_parquet as expected to fail. Fix #7073. Revert once root cause is fixed.
albertvillanova
https://github.com/huggingface/datasets/pull/7074
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7074", "html_url": "https://github.com/huggingface/datasets/pull/7074", "diff_url": "https://github.com/huggingface/datasets/pull/7074.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7074.patch", "merged_at": "2024-07-26T09:16...
true
2,431,706,568
7,073
CI is broken for convert_to_parquet: Invalid rev id: refs/pr/1 404 error causes RevisionNotFoundError
closed
[ "Any recent change in the API backend rejecting parameter `revision=\"refs/pr/1\"` to `HfApi.preupload_lfs_files`?\r\n```\r\nf\"{endpoint}/api/{repo_type}s/{repo_id}/preupload/{revision}\"\r\n\r\nhttps://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5188a8-17219154347516/preupload/refs...
2024-07-26T08:27:41
2024-07-27T05:48:02
2024-07-26T09:16:13
See: https://github.com/huggingface/datasets/actions/runs/10095313567/job/27915185756 ``` FAILED tests/test_hub.py::test_convert_to_parquet - huggingface_hub.utils._errors.RevisionNotFoundError: 404 Client Error. (Request ID: Root=1-66a25839-31ce7b475e70e7db1e4d44c2;b0c8870f-d5ef-4bf2-a6ff-0191f3df0f64) Revision N...
albertvillanova
https://github.com/huggingface/datasets/issues/7073
null
false
2,430,577,916
7,072
nm
closed
[]
2024-07-25T17:03:24
2024-07-25T20:36:11
2024-07-25T20:36:11
null
brettdavies
https://github.com/huggingface/datasets/issues/7072
null
false
2,430,313,011
7,071
Filter hangs
open
[]
2024-07-25T15:29:05
2024-07-25T15:36:59
null
### Describe the bug When trying to filter my custom dataset, the process hangs, regardless of the lambda function used. It appears to be an issue with the way the Images are being handled. The dataset in question is a preprocessed version of https://huggingface.co/datasets/danaaubakirova/patfig where notably, I hav...
lucienwalewski
https://github.com/huggingface/datasets/issues/7071
null
false
2,430,285,235
7,070
how set_transform affects batch size?
open
[]
2024-07-25T15:19:34
2024-07-25T15:19:34
null
### Describe the bug I am trying to fine-tune w2v-bert for an ASR task. Since my dataset is so big, I preferred to use the on-the-fly method with set_transform. So I changed the preprocessing function to this: ``` def prepare_dataset(batch): input_features = processor(batch["audio"], sampling_rate=16000).input_feat...
VafaKnm
https://github.com/huggingface/datasets/issues/7070
null
false
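Regarding the question in #7070: `set_transform` applies its function lazily on whatever slice of rows is accessed, so the batch size seen inside the transform is driven by the caller (e.g. the DataLoader), not fixed in advance. A pure-Python sketch of that on-access pattern; the class and names are illustrative, not the `datasets` internals:

```python
class LazyTransformTable:
    """Stores raw rows; applies a transform only when rows are accessed."""

    def __init__(self, rows, transform):
        self.rows = rows
        self.transform = transform

    def __getitem__(self, key):
        batch = self.rows[key] if isinstance(key, slice) else [self.rows[key]]
        # The transform receives exactly the rows requested: the effective
        # batch size is whatever the caller asked for.
        return self.transform(batch)

table = LazyTransformTable(list(range(10)), lambda b: [x * 2 for x in b])
small = table[0:2]   # transform sees 2 rows
large = table[0:5]   # transform sees 5 rows
print(small, large)
```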
2,429,281,339
7,069
Fix push_to_hub by not calling create_branch if PR branch
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7069). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "cc @Wauplin maybe it's a `huggingface_hub` bug ?\r\n\r\nEDIT: ah actually the issue is ...
2024-07-25T07:50:04
2024-07-31T07:10:07
2024-07-30T10:51:01
Fix push_to_hub by not calling create_branch if PR branch (e.g. `refs/pr/1`). Note that currently create_branch raises a 400 Bad Request error if the user passes a PR branch (e.g. `refs/pr/1`). EDIT: ~~Fix push_to_hub by not calling create_branch if branch exists.~~ Note that currently create_branch raises a ...
albertvillanova
https://github.com/huggingface/datasets/pull/7069
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7069", "html_url": "https://github.com/huggingface/datasets/pull/7069", "diff_url": "https://github.com/huggingface/datasets/pull/7069.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7069.patch", "merged_at": "2024-07-30T10:51...
true
2,426,657,434
7,068
Fix prepare_single_hop_path_and_storage_options
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7068). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2024-07-24T05:52:34
2024-07-29T07:02:07
2024-07-29T06:56:15
Fix `_prepare_single_hop_path_and_storage_options`: - Do not pass HF authentication headers and HF user-agent to non-HF HTTP URLs - Do not overwrite passed `storage_options` nested values: - Before, when passed ```DownloadConfig(storage_options={"https": {"client_kwargs": {"raise_for_status": True}}})```, ...
albertvillanova
https://github.com/huggingface/datasets/pull/7068
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7068", "html_url": "https://github.com/huggingface/datasets/pull/7068", "diff_url": "https://github.com/huggingface/datasets/pull/7068.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7068.patch", "merged_at": "2024-07-29T06:56...
true
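The second part of the fix in #7068 hinges on merging nested `storage_options` instead of overwriting them. A minimal recursive-merge sketch; this is a generic illustration, not the exact code from the PR:

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Merge override into base without clobbering nested dicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

passed = {"https": {"client_kwargs": {"raise_for_status": True}}}
internal = {"https": {"client_kwargs": {"trust_env": True}}}
result = deep_merge(passed, internal)
print(result)
```

A naive `base.update(override)` would replace the whole `"https"` value, losing the user's `raise_for_status` setting, which is the overwrite behavior the PR describes.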
2,425,460,168
7,067
Convert_to_parquet fails for datasets with multiple configs
closed
[ "Many users have encountered the same issue, which has caused inconvenience.\r\n\r\nhttps://discuss.huggingface.co/t/convert-to-parquet-fails-for-datasets-with-multiple-configs/86733\r\n", "Thanks for reporting.\r\n\r\nI will make the code more robust.", "I have opened an issue in the huggingface-hub repo:\r\n-...
2024-07-23T15:09:33
2024-07-30T10:51:02
2024-07-30T10:51:02
If the dataset has multiple configs, when using the `datasets-cli convert_to_parquet` command to avoid issues with the data viewer caused by loading scripts, the conversion process only successfully converts the data corresponding to the first config. When it starts converting the second config, it throws an error: ...
HuangZhen02
https://github.com/huggingface/datasets/issues/7067
null
false
2,425,125,160
7,066
One subset per file in repo ?
open
[ "Hi @lhoestq! I’ve opened a PR that addresses this issue" ]
2024-07-23T12:43:59
2025-06-26T08:24:50
null
Right now we consider all the files of a dataset to be the same data, e.g. ``` single_subset_dataset/ β”œβ”€β”€ train0.jsonl β”œβ”€β”€ train1.jsonl └── train2.jsonl ``` but in cases like this, each file is actually a different subset of the dataset and should be loaded separately ``` many_subsets_dataset/ β”œβ”€β”€ animals.jso...
lhoestq
https://github.com/huggingface/datasets/issues/7066
null
false
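The proposal in #7066 amounts to deriving one subset per data file instead of pooling all files into a single config. A stdlib sketch of one possible naming scheme (stem of the filename as the subset name); the scheme is hypothetical, not what `datasets` implements:

```python
from pathlib import PurePosixPath

def infer_subsets(files):
    """Map each data file to a subset named after its stem (hypothetical scheme)."""
    return {PurePosixPath(f).stem: f for f in files}

subsets = infer_subsets(["animals.jsonl", "trees.jsonl", "metadata.jsonl"])
print(subsets)
```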