Schema (column: dtype, with min/max stats from the dataset viewer):

id: int64 (599M to 3.48B)
number: int64 (1 to 7.8k)
title: string (lengths 1 to 290)
state: string (2 values)
comments: list (lengths 0 to 30)
created_at: timestamp[s] (2020-04-14 10:18:02 to 2025-10-05 06:37:50)
updated_at: timestamp[s] (2020-04-27 16:04:17 to 2025-10-05 10:32:43)
closed_at: timestamp[s] (2020-04-14 12:01:40 to 2025-10-01 13:56:03)
body: string (lengths 0 to 228k)
user: string (lengths 3 to 26)
html_url: string (lengths 46 to 51)
pull_request: dict
is_pull_request: bool (2 classes)
3,121,689,436
7,595
Add `IterableDataset.push_to_hub()`
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7595). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-06-05T15:29:32
2025-06-06T16:12:37
2025-06-06T16:12:36
Basic implementation, which writes one shard per input dataset shard. This is to be improved later. Close https://github.com/huggingface/datasets/issues/5665 PS: for image/audio datasets structured as actual image/audio files (not parquet), you can sometimes speed it up with `ds.decode(num_threads=...).push_to_h...
lhoestq
https://github.com/huggingface/datasets/pull/7595
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7595", "html_url": "https://github.com/huggingface/datasets/pull/7595", "diff_url": "https://github.com/huggingface/datasets/pull/7595.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7595.patch", "merged_at": "2025-06-06T16:12...
true
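The PR body above describes a "one shard per input dataset shard" strategy. A minimal stdlib sketch of that idea (hypothetical helper name, JSON-lines files as a stand-in for the Parquet shards the real method uploads to the Hub):

```python
import json
import pathlib
import tempfile

def push_shards(shards, out_dir):
    # Hypothetical sketch: write exactly one output file per input shard,
    # mirroring the PR's "one shard per input dataset shard" strategy.
    paths = []
    for i, shard in enumerate(shards):
        path = pathlib.Path(out_dir) / f"shard-{i:05d}.jsonl"
        path.write_text("\n".join(json.dumps(row) for row in shard) + "\n")
        paths.append(path)
    return paths

shards = [[{"text": "a"}, {"text": "b"}], [{"text": "c"}]]
with tempfile.TemporaryDirectory() as tmp:
    written = push_shards(shards, tmp)
    n_files = len(written)
```

Two input shards produce two output files; the real implementation is to be improved later with resharding.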
3,120,799,626
7,594
Add option to ignore keys/columns when loading a dataset from jsonl(or any other data format)
open
[ "Good point, I'd be in favor of having the `columns` argument in `JsonConfig` (and the others) to align with `ParquetConfig` to let users choose which columns to load and ignore the rest", "Is it possible to ignore columns when using parquet? ", "Yes, you can pass `columns=...` to load_dataset to select which c...
2025-06-05T11:12:45
2025-06-28T09:03:00
null
### Feature request Hi, I would like the option to ignore keys/columns when loading a dataset from files (e.g. jsonl). ### Motivation I am working on a dataset which is built on jsonl. It seems the dataset is unclean and a column has different types in each row. I can't clean this or remove the column (It is not my ...
avishaiElmakies
https://github.com/huggingface/datasets/issues/7594
null
false
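The maintainer's comment suggests aligning `JsonConfig` with `ParquetConfig` via a `columns` argument. A stdlib-only sketch of the requested behavior (dropping unwanted keys while reading JSON lines; this is not the `datasets` API, and the column names are hypothetical):

```python
import io
import json

KEEP = {"text", "label"}  # hypothetical whitelist of columns to load

# one column ("messy") has inconsistent types across rows, as in the issue
raw = io.StringIO(
    '{"text": "hi", "label": 1, "messy": [1, 2]}\n'
    '{"text": "yo", "label": 0, "messy": "oops"}\n'
)
rows = [
    {k: v for k, v in json.loads(line).items() if k in KEEP}
    for line in raw
]
```

Filtering keys before type inference sidesteps the unclean column entirely.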
3,118,812,368
7,593
Fix broken link to albumentations
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7593). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@lhoestq ping" ]
2025-06-04T19:00:13
2025-06-05T16:37:02
2025-06-05T16:36:32
A few months back I rewrote all docs at [https://albumentations.ai/docs](https://albumentations.ai/docs), and some pages changed their links. This PR fixes the link to the most recent Albumentations doc about bounding boxes and their format, and fixes a few typos in the doc as well.
ternaus
https://github.com/huggingface/datasets/pull/7593
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7593", "html_url": "https://github.com/huggingface/datasets/pull/7593", "diff_url": "https://github.com/huggingface/datasets/pull/7593.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7593.patch", "merged_at": "2025-06-05T16:36...
true
3,118,203,880
7,592
Remove scripts altogether
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7592). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Hi @lhoestq,\r\nI wanted to ask\r\nare you planning to stop supporting dataset builds u...
2025-06-04T15:14:11
2025-09-04T11:03:43
2025-06-09T16:45:27
TODO: - [x] replace script-based fixtures with no-script fixtures - [x] windaube
lhoestq
https://github.com/huggingface/datasets/pull/7592
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7592", "html_url": "https://github.com/huggingface/datasets/pull/7592", "diff_url": "https://github.com/huggingface/datasets/pull/7592.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7592.patch", "merged_at": "2025-06-09T16:45...
true
3,117,816,388
7,591
Add num_proc parameter to push_to_hub
closed
[ "Hi @SwayStar123 \n\nI'd be interested in taking this up. I plan to add a `num_proc` parameter to `push_to_hub()` and use parallel uploads for shards using `concurrent.futures`. Will explore whether `ThreadPoolExecutor` or `ProcessPoolExecutor` is more suitable based on current implementation. Let me know if that s...
2025-06-04T13:19:15
2025-09-04T10:43:33
2025-09-04T10:43:33
### Feature request A number-of-processes parameter for the dataset.push_to_hub method ### Motivation Shards are currently uploaded serially, which is slow when there are many shards; uploading them in parallel would be much faster
SwayStar123
https://github.com/huggingface/datasets/issues/7591
null
false
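The discussion above proposes `concurrent.futures` for the parallel uploads. A minimal thread-pool sketch of the requested `num_proc` behavior (the function and parameter names are hypothetical stand-ins, not the library's API):

```python
from concurrent.futures import ThreadPoolExecutor

def upload_shard(shard_id: int) -> str:
    # stand-in for the real per-shard upload call; uploads are
    # network-bound, so threads (not processes) are usually enough
    return f"shard-{shard_id} uploaded"

def parallel_push(num_shards: int, num_proc: int) -> list[str]:
    # hypothetical sketch of a num_proc parameter: upload shards
    # concurrently instead of serially
    with ThreadPoolExecutor(max_workers=num_proc) as pool:
        # pool.map preserves input order in its results
        return list(pool.map(upload_shard, range(num_shards)))

results = parallel_push(num_shards=4, num_proc=2)
```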
3,101,654,892
7,590
`Sequence(Features(...))` causes PyArrow cast error in `load_dataset` despite correct schema.
closed
[ "Hi @lhoestq \n\nCould you help confirm whether this qualifies as a bug?\n\nIt looks like the issue stems from how `Sequence(Features(...))` is interpreted as a plain struct during schema inference, which leads to a mismatch when casting with PyArrow (especially with nested structs inside lists). From the descripti...
2025-05-29T22:53:36
2025-07-19T22:45:08
2025-07-19T22:45:08
### Description When loading a dataset with a field declared as a list of structs using `Sequence(Features(...))`, `load_dataset` incorrectly infers the field as a plain `struct<...>` instead of a `list<struct<...>>`. This leads to the following error: ``` ArrowNotImplementedError: Unsupported cast from list<item: st...
AHS-uni
https://github.com/huggingface/datasets/issues/7590
null
false
3,101,119,704
7,589
feat: use content defined chunking
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7589). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Need to set `DEFAULT_MAX_BATCH_SIZE = 1024 * 1024`", "We should consider enabling pag...
2025-05-29T18:19:41
2025-09-09T13:45:25
2025-09-09T13:45:24
Use content defined chunking by default when writing parquet files. - [x] set the parameters in `io.parquet.ParquetDatasetReader` - [x] set the parameters in `arrow_writer.ParquetWriter` It requires a new pyarrow pin ">=21.0.0", which has now been released.
kszucs
https://github.com/huggingface/datasets/pull/7589
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7589", "html_url": "https://github.com/huggingface/datasets/pull/7589", "diff_url": "https://github.com/huggingface/datasets/pull/7589.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7589.patch", "merged_at": "2025-09-09T13:45...
true
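The PR above enables content-defined chunking in the PyArrow Parquet writer. A toy byte-level illustration of the general technique (this is a simplified sketch of the idea, not the Parquet implementation): chunk boundaries are chosen where a rolling hash of the content matches a mask, so boundaries depend on the data itself rather than fixed offsets, and an insertion only shifts nearby chunks.

```python
def cdc_chunks(data: bytes, mask: int = 0x3F) -> list[bytes]:
    # declare a boundary whenever the rolling hash matches the mask;
    # identical content always produces identical boundaries
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = (h * 31 + b) & 0xFFFFFFFF
        if h & mask == mask:
            chunks.append(data[start : i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

data = bytes(range(256)) * 4
chunks = cdc_chunks(data)
```

Because boundaries are content-derived, re-uploading a file with a small edit re-transfers only the affected chunks, which is what makes this attractive for Parquet files on the Hub.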
3,094,012,025
7,588
ValueError: Invalid pattern: '**' can only be an entire path component [Colab]
closed
[ "Could you please run the following code snippet in your environment and share the exact output? This will help check for any compatibility issues within the env itself. \n\n```\nimport datasets\nimport huggingface_hub\nimport fsspec\n\nprint(\"datasets version:\", datasets.__version__)\nprint(\"huggingface_hub ver...
2025-05-27T13:46:05
2025-05-30T13:22:52
2025-05-30T01:26:30
### Describe the bug I have a dataset on HF [here](https://huggingface.co/datasets/kambale/luganda-english-parallel-corpus) that i've previously used to train a translation model [here](https://huggingface.co/kambale/pearl-11m-translate). now i changed a few hyperparameters to increase number of tokens for the model,...
wkambale
https://github.com/huggingface/datasets/issues/7588
null
false
3,091,834,987
7,587
load_dataset splits typing
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7587). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-26T18:28:40
2025-05-26T18:31:10
2025-05-26T18:29:57
close https://github.com/huggingface/datasets/issues/7583
lhoestq
https://github.com/huggingface/datasets/pull/7587
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7587", "html_url": "https://github.com/huggingface/datasets/pull/7587", "diff_url": "https://github.com/huggingface/datasets/pull/7587.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7587.patch", "merged_at": "2025-05-26T18:29...
true
3,091,320,431
7,586
help is appreciated
open
[ "how is this related to this repository ?" ]
2025-05-26T14:00:42
2025-05-26T18:21:57
null
### Feature request https://github.com/rajasekarnp1/neural-audio-upscaler/tree/main ### Motivation AI model development and audio ### Your contribution AI model development and audio
rajasekarnp1
https://github.com/huggingface/datasets/issues/7586
null
false
3,091,227,921
7,585
Avoid multiple default config names
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7585). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-26T13:27:59
2025-06-05T12:41:54
2025-06-05T12:41:52
Fix duplicated default config names. Currently, when calling `push_to_hub(set_default=True)` with 2 different config names, both are set as default. Moreover, this will generate an error the next time we try to push another default config name, raised by `MetadataConfigs.get_default_config_name`: https://github.com...
albertvillanova
https://github.com/huggingface/datasets/pull/7585
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7585", "html_url": "https://github.com/huggingface/datasets/pull/7585", "diff_url": "https://github.com/huggingface/datasets/pull/7585.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7585.patch", "merged_at": "2025-06-05T12:41...
true
3,090,255,023
7,584
Add LMDB format support
open
[ "Hi ! Can you explain what's your use case ? Is it about converting LMDB to Dataset objects (i.e. converting to Arrow) ?" ]
2025-05-26T07:10:13
2025-05-26T18:23:37
null
### Feature request Add LMDB format support for large memory-mapped files ### Motivation Add LMDB format support for large memory-mapped files ### Your contribution I'm trying to add it
trotsky1997
https://github.com/huggingface/datasets/issues/7584
null
false
3,088,987,757
7,583
load_dataset type stubs reject List[str] for split parameter, but runtime supports it
closed
[]
2025-05-25T02:33:18
2025-05-26T18:29:58
2025-05-26T18:29:58
### Describe the bug The [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) method accepts a `List[str]` as the split parameter at runtime, however, the current type stubs restrict the split parameter to `Union[str, Split, None]`. This causes type che...
hierr
https://github.com/huggingface/datasets/issues/7583
null
false
3,083,515,643
7,582
fix: Add embed_storage in Pdf feature
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7582). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-22T14:06:29
2025-05-22T14:17:38
2025-05-22T14:17:36
Add missing `embed_storage` method in Pdf feature (Same as in Audio and Image)
AndreaFrancis
https://github.com/huggingface/datasets/pull/7582
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7582", "html_url": "https://github.com/huggingface/datasets/pull/7582", "diff_url": "https://github.com/huggingface/datasets/pull/7582.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7582.patch", "merged_at": "2025-05-22T14:17...
true
3,083,080,413
7,581
Add missing property on `RepeatExamplesIterable`
closed
[]
2025-05-22T11:41:07
2025-06-05T12:41:30
2025-06-05T12:41:29
Fixes #7561
SilvanCodes
https://github.com/huggingface/datasets/pull/7581
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7581", "html_url": "https://github.com/huggingface/datasets/pull/7581", "diff_url": "https://github.com/huggingface/datasets/pull/7581.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7581.patch", "merged_at": "2025-06-05T12:41...
true
3,082,993,027
7,580
Requesting a specific split (eg: test) still downloads all (train, test, val) data when streaming=False.
open
[ "Hi ! There was a PR open to improve this: https://github.com/huggingface/datasets/pull/6832 \nbut it hasn't been continued so far.\n\nIt would be a cool improvement though !" ]
2025-05-22T11:08:16
2025-05-26T18:40:31
null
### Describe the bug When using load_dataset() from the datasets library (in load.py), specifying a particular split (e.g., split="train") still results in downloading data for all splits when streaming=False. This happens during the builder_instance.download_and_prepare() call. This behavior leads to unnecessary band...
s3pi
https://github.com/huggingface/datasets/issues/7580
null
false
3,081,849,022
7,579
Fix typos in PDF and Video documentation
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7579). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-22T02:27:40
2025-05-22T12:53:49
2025-05-22T12:53:47
null
AndreaFrancis
https://github.com/huggingface/datasets/pull/7579
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7579", "html_url": "https://github.com/huggingface/datasets/pull/7579", "diff_url": "https://github.com/huggingface/datasets/pull/7579.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7579.patch", "merged_at": "2025-05-22T12:53...
true
3,080,833,740
7,577
arrow_schema is not compatible with list
closed
[ "Thanks for reporting, I'll look into it", "Actually it looks like you just forgot parenthesis:\n\n```diff\n- f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})\n+ f = datasets.Features({'x': list([datasets.Value(dtype='int32')])})\n```\n\nor simply using the `[ ]` syntax:\n\n```python\nf = dataset...
2025-05-21T16:37:01
2025-05-26T18:49:51
2025-05-26T18:32:55
### Describe the bug ``` import datasets f = datasets.Features({'x': list[datasets.Value(dtype='int32')]}) f.arrow_schema Traceback (most recent call last): File "datasets/features/features.py", line 1826, in arrow_schema return pa.schema(self.type).with_metadata({"huggingface": json.dumps(hf_metadata)}) ...
jonathanshen-upwork
https://github.com/huggingface/datasets/issues/7577
null
false
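The maintainer's reply above pinpoints the cause: subscripting `list` (as in `list[datasets.Value(dtype='int32')]`) produces a typing construct, not a list instance, which is why `Features` cannot interpret it. A stdlib demonstration of the distinction (plain `int` stands in for `datasets.Value(...)` so the snippet needs no third-party install):

```python
import types

# subscripting `list` yields a typing construct, not a list instance
wrong = list[int]   # analogous to list[datasets.Value(dtype="int32")]
right = [int]       # analogous to [datasets.Value(dtype="int32")]

is_alias = isinstance(wrong, types.GenericAlias)
is_list = isinstance(right, list)
```

Hence the suggested fix in the thread: use the `[ ]` syntax (or `list(...)`) so that the feature spec is an actual list.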
3,080,450,538
7,576
Fix regex library warnings
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7576). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-21T14:31:58
2025-06-05T13:35:16
2025-06-05T12:37:55
# PR Summary This small PR resolves the regex library warnings that appear starting with Python 3.11: ```python DeprecationWarning: 'count' is passed as positional argument ```
emmanuel-ferdman
https://github.com/huggingface/datasets/pull/7576
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7576", "html_url": "https://github.com/huggingface/datasets/pull/7576", "diff_url": "https://github.com/huggingface/datasets/pull/7576.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7576.patch", "merged_at": "2025-06-05T12:37...
true
3,080,228,718
7,575
[MINOR:TYPO] Update save_to_disk docstring
closed
[]
2025-05-21T13:22:24
2025-06-05T12:39:13
2025-06-05T12:39:13
s/hub/filesystem/ in the save_to_disk docstring
cakiki
https://github.com/huggingface/datasets/pull/7575
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7575", "html_url": "https://github.com/huggingface/datasets/pull/7575", "diff_url": "https://github.com/huggingface/datasets/pull/7575.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7575.patch", "merged_at": "2025-06-05T12:39...
true
3,079,641,072
7,574
Missing multilingual directions in IWSLT2017 dataset's processing script
open
[ "I have opened 2 PRs on the Hub: `https://huggingface.co/datasets/IWSLT/iwslt2017/discussions/7` and `https://huggingface.co/datasets/IWSLT/iwslt2017/discussions/8` to resolve this issue", "cool ! I pinged the owners of the dataset on HF to merge your PRs :)" ]
2025-05-21T09:53:17
2025-05-26T18:36:38
null
### Describe the bug Hi, Upon using `iwslt2017.py` in `IWSLT/iwslt2017` on the Hub for loading the datasets, I am unable to obtain the datasets for the language pairs `de-it`, `de-ro`, `de-nl`, `it-de`, `nl-de`, and `ro-de` using it. These 6 pairs do not show up when using `get_dataset_config_names()` to obtain the ...
andy-joy-25
https://github.com/huggingface/datasets/issues/7574
null
false
3,076,415,382
7,573
No Samsum dataset
closed
[ "According to the following https://huggingface.co/posts/seawolf2357/424129432408590, as of now the dataset seems to be inaccessible.\n\n@IgorKasianenko, would https://huggingface.co/datasets/knkarthick/samsum suffice for your purpose?\n", "Thanks @SP1029 for the update!\nThat will work for now, using it as repla...
2025-05-20T09:54:35
2025-07-21T18:34:34
2025-06-18T12:52:23
### Describe the bug https://huggingface.co/datasets/Samsung/samsum dataset not found error 404 Originated from https://github.com/meta-llama/llama-cookbook/issues/948 ### Steps to reproduce the bug go to website https://huggingface.co/datasets/Samsung/samsum see the error also downloading it with python throws `...
IgorKasianenko
https://github.com/huggingface/datasets/issues/7573
null
false
3,074,529,251
7,572
Fixed typos
closed
[ "@lhoestq, mentioning in case you haven't seen this PR. The contribution is very small and easy to check :)" ]
2025-05-19T17:16:59
2025-06-05T12:25:42
2025-06-05T12:25:41
More info: [comment](https://github.com/huggingface/datasets/pull/7564#issuecomment-2863391781).
TopCoder2K
https://github.com/huggingface/datasets/pull/7572
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7572", "html_url": "https://github.com/huggingface/datasets/pull/7572", "diff_url": "https://github.com/huggingface/datasets/pull/7572.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7572.patch", "merged_at": "2025-06-05T12:25...
true
3,074,116,942
7,571
fix string_to_dict test
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7571). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-19T14:49:23
2025-05-19T14:52:24
2025-05-19T14:49:28
null
lhoestq
https://github.com/huggingface/datasets/pull/7571
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7571", "html_url": "https://github.com/huggingface/datasets/pull/7571", "diff_url": "https://github.com/huggingface/datasets/pull/7571.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7571.patch", "merged_at": "2025-05-19T14:49...
true
3,065,966,529
7,570
Dataset lib seems to broke after fssec lib update
closed
[ "Hi, can you try updating `datasets` ? Colab still installs `datasets` 2.x by default, instead of 3.x\n\nIt would be cool to also report this to google colab, they have a GitHub repo for this IIRC", "@lhoestq I have updated it to `datasets==3.6.0` and now there's an entirely different issue on colab while locally...
2025-05-15T11:45:06
2025-06-13T00:44:27
2025-06-13T00:44:27
### Describe the bug I am facing an issue since today where HF's dataset lib is acting weird and in some instances fails to recognise a valid dataset entirely. I think it is happening due to a recent change in the `fsspec` lib, as using this command fixed it for me one time: `!pip install -U datasets huggingface_hub fsspec`...
sleepingcat4
https://github.com/huggingface/datasets/issues/7570
null
false
3,061,234,054
7,569
Dataset creation is broken if nesting a dict inside a dict inside a list
open
[ "Hi ! That's because Sequence is a type that comes from tensorflow datasets and inverts lists and dicts when doing Sequence(dict).\n\nInstead you should use a list. In your case\n```python\nfeatures = Features({\n \"a\": [{\"b\": {\"c\": Value(\"string\")}}]\n})\n```", "Hi,\n\nThanks for the swift reply! Could...
2025-05-13T21:06:45
2025-05-20T19:25:15
null
### Describe the bug Hey, I noticed that the creation of datasets with `Dataset.from_generator` is broken if dicts and lists are nested in a certain way and a schema is being passed. See below for details. Best, Tim ### Steps to reproduce the bug Runing this code: ```python from datasets import Dataset, Features,...
TimSchneider42
https://github.com/huggingface/datasets/issues/7569
null
false
3,060,515,257
7,568
`IterableDatasetDict.map()` call removes `column_names` (in fact info.features)
open
[ "Hi ! IterableDataset doesn't know what's the output of the function you pass to map(), so it's not possible to know in advance the features of the output dataset.\n\nThere is a workaround though: either do `ds = ds.map(..., features=features)`, or you can do `ds = ds._resolve_features()` which iterates on the firs...
2025-05-13T15:45:42
2025-06-30T09:33:47
null
When calling `IterableDatasetDict.map()`, each split’s `IterableDataset.map()` is invoked without a `features` argument. While omitting the argument isn’t itself incorrect, the implementation then sets `info.features = features`, which destroys the original `features` content. Since `IterableDataset.column_names` relie...
mombip
https://github.com/huggingface/datasets/issues/7568
null
false
3,058,308,538
7,567
interleave_datasets seed with multiple workers
open
[ "Hi ! It's already the case IIRC: the effective seed looks like `seed + worker_id`. Do you have a reproducible example ?", "here is an example with shuffle\n\n```\nimport itertools\nimport datasets\nimport multiprocessing\nimport torch.utils.data\n\n\ndef gen(shard):\n worker_info = torch.utils.data.get_worker_i...
2025-05-12T22:38:27
2025-06-29T06:53:59
null
### Describe the bug Using interleave_datasets with multiple dataloader workers and a seed set causes the same dataset sampling order across all workers. Should the seed be modulated with the worker id? ### Steps to reproduce the bug See above ### Expected behavior See above ### Environment info - `datasets` ve...
jonathanasdf
https://github.com/huggingface/datasets/issues/7567
null
false
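The maintainer's reply above says the effective seed already "looks like `seed + worker_id`". A tiny sketch of that per-worker derivation (the helper name is hypothetical, not the library's internal function):

```python
def effective_seed(base_seed: int, worker_id: int) -> int:
    # per the maintainer's comment: each dataloader worker derives its
    # own seed so workers don't repeat the same sampling order
    return base_seed + worker_id

worker_seeds = [effective_seed(42, w) for w in range(4)]
```

Each of the four workers gets a distinct seed, so shuffling/interleaving order differs per worker while remaining reproducible.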
3,055,279,344
7,566
terminate called without an active exception; Aborted (core dumped)
open
[ "@alexey-milovidov I followed the code snippet, but am able to successfully execute without any error. Could you please verify if the error persists or there is any additional details.", "@alexey-milovidov else if the problem does not exist please feel free to close this issue.", "```\nmilovidov@milovidov-pc:~/...
2025-05-11T23:05:54
2025-06-23T17:56:02
null
### Describe the bug I use it as in the tutorial here: https://huggingface.co/docs/datasets/stream, and it ends up with abort. ### Steps to reproduce the bug 1. `pip install datasets` 2. ``` $ cat main.py #!/usr/bin/env python3 from datasets import load_dataset dataset = load_dataset('HuggingFaceFW/fineweb', spl...
alexey-milovidov
https://github.com/huggingface/datasets/issues/7566
null
false
3,051,731,207
7,565
add check if repo exists for dataset uploading
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7565). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@lhoestq Can you review, please? I don't think that errors in CI are related to my chan...
2025-05-09T10:27:00
2025-09-11T05:00:10
2025-09-11T05:00:09
Currently, I'm reuploading datasets for [`MTEB`](https://github.com/embeddings-benchmark/mteb/). Some of them have many splits (more than 20), and I'm encountering the error: `Too many requests for https://huggingface.co/datasets/repo/create`. It seems that this issue occurs because the dataset tries to recreate it...
Samoed
https://github.com/huggingface/datasets/pull/7565
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7565", "html_url": "https://github.com/huggingface/datasets/pull/7565", "diff_url": "https://github.com/huggingface/datasets/pull/7565.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7565.patch", "merged_at": null }
true
3,049,275,226
7,564
Implementation of iteration over values of a column in an IterableDataset object
closed
[ "A couple of questions:\r\n1. I've noticed two strange things: 1) \"Around 80% of the final dataset is made of the `en_dataset`\" in https://huggingface.co/docs/datasets/stream, 2) \"Click on \"Pull request\" to send your to the project maintainers\" in https://github.com/huggingface/datasets/blob/main/CONTRIBUTING...
2025-05-08T14:59:22
2025-05-19T12:15:02
2025-05-19T12:15:02
Refers to [this issue](https://github.com/huggingface/datasets/issues/7381).
TopCoder2K
https://github.com/huggingface/datasets/pull/7564
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7564", "html_url": "https://github.com/huggingface/datasets/pull/7564", "diff_url": "https://github.com/huggingface/datasets/pull/7564.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7564.patch", "merged_at": "2025-05-19T12:15...
true
3,046,351,253
7,563
set dev version
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7563). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-07T15:18:29
2025-05-07T15:21:05
2025-05-07T15:18:36
null
lhoestq
https://github.com/huggingface/datasets/pull/7563
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7563", "html_url": "https://github.com/huggingface/datasets/pull/7563", "diff_url": "https://github.com/huggingface/datasets/pull/7563.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7563.patch", "merged_at": "2025-05-07T15:18...
true
3,046,339,430
7,562
release: 3.6.0
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7562). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-07T15:15:13
2025-05-07T15:17:46
2025-05-07T15:15:21
null
lhoestq
https://github.com/huggingface/datasets/pull/7562
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7562", "html_url": "https://github.com/huggingface/datasets/pull/7562", "diff_url": "https://github.com/huggingface/datasets/pull/7562.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7562.patch", "merged_at": "2025-05-07T15:15...
true
3,046,302,653
7,561
NotImplementedError: <class 'datasets.iterable_dataset.RepeatExamplesIterable'> doesn't implement num_shards yet
closed
[]
2025-05-07T15:05:42
2025-06-05T12:41:30
2025-06-05T12:41:30
### Describe the bug When using `.repeat()` on an `IterableDataset`, this error gets thrown. There is [this thread](https://discuss.huggingface.co/t/making-an-infinite-iterabledataset/146192/5) that seems to imply the fix is trivial, but I don't know anything about this codebase, so I'm opening this issue rather than ...
cyanic-selkie
https://github.com/huggingface/datasets/issues/7561
null
false
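The eventual fix (PR #7581 above, "Add missing property on `RepeatExamplesIterable`") adds the missing `num_shards`. A minimal sketch of a repeat wrapper that forwards `num_shards` from the wrapped iterable (hypothetical class names, not the library's internals):

```python
class TinyIterable:
    # stand-in for an underlying sharded examples iterable
    num_shards = 2

    def __iter__(self):
        yield from [{"x": 0}, {"x": 1}]

class RepeatIterable:
    """Sketch: repeat the inner iterable n times and forward num_shards,
    the property the traceback above reports as unimplemented."""

    def __init__(self, inner, n):
        self.inner, self.n = inner, n

    @property
    def num_shards(self):
        # delegate to the wrapped iterable instead of raising
        return self.inner.num_shards

    def __iter__(self):
        for _ in range(self.n):
            yield from self.inner

repeated = RepeatIterable(TinyIterable(), n=3)
total = sum(1 for _ in repeated)
shards = repeated.num_shards
```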
3,046,265,500
7,560
fix decoding tests
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7560). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-07T14:56:14
2025-05-07T14:59:02
2025-05-07T14:56:20
null
lhoestq
https://github.com/huggingface/datasets/pull/7560
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7560", "html_url": "https://github.com/huggingface/datasets/pull/7560", "diff_url": "https://github.com/huggingface/datasets/pull/7560.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7560.patch", "merged_at": "2025-05-07T14:56...
true
3,046,177,078
7,559
fix aiohttp import
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7559). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-07T14:31:32
2025-05-07T14:34:34
2025-05-07T14:31:38
null
lhoestq
https://github.com/huggingface/datasets/pull/7559
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7559", "html_url": "https://github.com/huggingface/datasets/pull/7559", "diff_url": "https://github.com/huggingface/datasets/pull/7559.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7559.patch", "merged_at": "2025-05-07T14:31...
true
3,046,066,628
7,558
fix regression
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7558). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-07T13:56:03
2025-05-07T13:58:52
2025-05-07T13:56:18
reported in https://github.com/huggingface/datasets/pull/7557 (I just reorganized the condition) wanted to apply this change to the original PR but github didn't let me apply it directly - merging this one instead
lhoestq
https://github.com/huggingface/datasets/pull/7558
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7558", "html_url": "https://github.com/huggingface/datasets/pull/7558", "diff_url": "https://github.com/huggingface/datasets/pull/7558.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7558.patch", "merged_at": "2025-05-07T13:56...
true
3,045,962,076
7,557
check for empty _formatting
closed
[ "Thanks for reporting and for the fix ! I tried to reorganize the condition in your PR but didn't get the right permission so. I ended up merging https://github.com/huggingface/datasets/pull/7558 directly so I can make a release today - I hope you don't mind" ]
2025-05-07T13:22:37
2025-05-07T13:57:12
2025-05-07T13:57:12
Fixes a regression from #7553 breaking shuffling of iterable datasets <img width="884" alt="Screenshot 2025-05-07 at 9 16 52 AM" src="https://github.com/user-attachments/assets/d2f43c5f-4092-4efe-ac31-a32cbd025fe3" />
winglian
https://github.com/huggingface/datasets/pull/7557
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7557", "html_url": "https://github.com/huggingface/datasets/pull/7557", "diff_url": "https://github.com/huggingface/datasets/pull/7557.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7557.patch", "merged_at": null }
true
3,043,615,210
7,556
Add `--merge-pull-request` option for `convert_to_parquet`
closed
[ "This is ready for a review, happy to make any changes. The main question for maintainers is how this should interact with #7555. If my suggestion there is accepted, this PR can be kept as is. If not, more changes are required to merge all the PR parts.", "Closing since convert to parquet has been removed... http...
2025-05-06T18:05:05
2025-07-18T19:09:10
2025-07-18T19:09:10
Closes #7527 Note that this implementation **will only merge the last PR in the case that they get split up by `push_to_hub`**. See https://github.com/huggingface/datasets/discussions/7555 for more details.
klamike
https://github.com/huggingface/datasets/pull/7556
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7556", "html_url": "https://github.com/huggingface/datasets/pull/7556", "diff_url": "https://github.com/huggingface/datasets/pull/7556.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7556.patch", "merged_at": null }
true
3,043,089,844
7,554
datasets downloads and generates all splits, even though a single split is requested (for dataset with loading script)
closed
[ "Hi ! there has been some effort on allowing to download only a subset of splits in https://github.com/huggingface/datasets/pull/6832 but no one has been continuing this work so far. This would be a welcomed contribution though\n\nAlso note that loading script are often unoptimized, and we recommend using datasets ...
2025-05-06T14:43:38
2025-05-07T14:53:45
2025-05-07T14:53:44
### Describe the bug `datasets` downloads and generates all splits, even though a single split is requested. [This](https://huggingface.co/datasets/jordiae/exebench) is the dataset in question. It uses a loading script. I am not 100% sure that this is a bug, because maybe with loading scripts `datasets` must actual...
sei-eschwartz
https://github.com/huggingface/datasets/issues/7554
null
false
3,042,953,907
7,553
Rebatch arrow iterables before formatted iterable
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7553). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@lhoestq Our CI found an issue with this changeset causing a regression with shuffling ...
2025-05-06T13:59:58
2025-05-07T13:17:41
2025-05-06T14:03:42
close https://github.com/huggingface/datasets/issues/7538 and https://github.com/huggingface/datasets/issues/7475
lhoestq
https://github.com/huggingface/datasets/pull/7553
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7553", "html_url": "https://github.com/huggingface/datasets/pull/7553", "diff_url": "https://github.com/huggingface/datasets/pull/7553.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7553.patch", "merged_at": "2025-05-06T14:03...
true
3,040,258,084
7,552
Enable xet in push to hub
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7552). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-05T17:02:09
2025-05-06T12:42:51
2025-05-06T12:42:48
follows https://github.com/huggingface/huggingface_hub/pull/3035 related to https://github.com/huggingface/datasets/issues/7526
lhoestq
https://github.com/huggingface/datasets/pull/7552
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7552", "html_url": "https://github.com/huggingface/datasets/pull/7552", "diff_url": "https://github.com/huggingface/datasets/pull/7552.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7552.patch", "merged_at": "2025-05-06T12:42...
true
3,038,114,928
7,551
Issue with offline mode and partial dataset cached
open
[ "It seems the problem comes from builder.py / create_config_id()\n\nOn the first call, when the cache is empty we have\n```\nconfig_kwargs = {'data_files': {'train': ['hf://datasets/uonlp/CulturaX@6a8734bc69fefcbb7735f4f9250f43e4cd7a442e/fr/fr_part_00038.parquet']}}\n```\nleading to config_id beeing 'default-2935e8...
2025-05-04T16:49:37
2025-05-13T03:18:43
null
### Describe the bug Hi, a issue related to #4760 here when loading a single file from a dataset, unable to access it in offline mode afterwards ### Steps to reproduce the bug ```python import os # os.environ["HF_HUB_OFFLINE"] = "1" os.environ["HF_TOKEN"] = "xxxxxxxxxxxxxx" import datasets dataset_name = "uonlp/...
nrv
https://github.com/huggingface/datasets/issues/7551
null
false
3,037,017,367
7,550
disable aiohttp depend for python 3.13t free-threading compat
closed
[]
2025-05-03T00:28:18
2025-05-03T00:28:24
2025-05-03T00:28:24
null
Qubitium
https://github.com/huggingface/datasets/pull/7550
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7550", "html_url": "https://github.com/huggingface/datasets/pull/7550", "diff_url": "https://github.com/huggingface/datasets/pull/7550.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7550.patch", "merged_at": null }
true
3,036,272,015
7,549
TypeError: Couldn't cast array of type string to null on webdataset format dataset
open
[ "seems to get fixed by explicitly adding `dataset_infos.json` like this\n\n```json\n{\n \"default\": {\n \"description\": \"Image dataset with tags and ratings\",\n \"citation\": \"\",\n \"homepage\": \"\",\n \"license\": \"\",\n \"features\": {\n \"image\": {\n \"dtype\": \"image\",\n ...
2025-05-02T15:18:07
2025-05-02T15:37:05
null
### Describe the bug ```python from datasets import load_dataset dataset = load_dataset("animetimm/danbooru-wdtagger-v4-w640-ws-30k") ``` got ``` File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/arrow_writer.py", line 626, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarro...
narugo1992
https://github.com/huggingface/datasets/issues/7549
null
false
3,035,568,851
7,548
Python 3.13t (free threads) Compat
open
[ "Update: `datasets` use `aiohttp` for data streaming and from what I understand data streaming is useful for large datasets that do not fit in memory and/or multi-modal datasets like image/audio where you only what the actual binary bits to fed in as needed. \n\nHowever, there are also many cases where aiohttp will...
2025-05-02T09:20:09
2025-05-12T15:11:32
null
### Describe the bug Cannot install `datasets` under `python 3.13t` due to dependency on `aiohttp` and aiohttp cannot be built for free-threading python. The `free threading` support issue in `aiothttp` is active since August 2024! Ouch. https://github.com/aio-libs/aiohttp/issues/8796#issue-2475941784 `pip install...
Qubitium
https://github.com/huggingface/datasets/issues/7548
null
false
3,034,830,291
7,547
Avoid global umask for setting file mode.
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7547). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-05-01T22:24:24
2025-05-06T13:05:00
2025-05-06T13:05:00
This PR updates the method for setting the permissions on `cache_path` after calling `shutil.move`. The call to `shutil.move` may not preserve permissions if the source and destination are on different filesystems. Reading and resetting umask can cause race conditions, so directly read what permissions were set for the...
ryan-clancy
https://github.com/huggingface/datasets/pull/7547
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7547", "html_url": "https://github.com/huggingface/datasets/pull/7547", "diff_url": "https://github.com/huggingface/datasets/pull/7547.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7547.patch", "merged_at": "2025-05-06T13:05...
true
3,034,018,298
7,546
Large memory use when loading large datasets to a ZFS pool
closed
[ "Hi ! datasets are memory mapped from disk, so they don't fill out your RAM. Not sure what's the source of your memory issue.\n\nWhat kind of system are you using ? and what kind of disk ?", "Well, the fact of the matter is that my RAM is getting filled out by running the given example, as shown in [this video](h...
2025-05-01T14:43:47
2025-05-13T13:30:09
2025-05-13T13:29:53
### Describe the bug When I load large parquet based datasets from the hub like `MLCommons/peoples_speech` using `load_dataset`, all my memory (500GB) is used and isn't released after loading, meaning that the process is terminated by the kernel if I try to load an additional dataset. This makes it impossible to train...
FredHaa
https://github.com/huggingface/datasets/issues/7546
null
false
3,031,617,547
7,545
Networked Pull Through Cache
open
[]
2025-04-30T15:16:33
2025-04-30T15:16:33
null
### Feature request Introduce a HF_DATASET_CACHE_NETWORK_LOCATION configuration (e.g. an environment variable) together with a companion network cache service. Enable a three-tier cache lookup for datasets: 1. Local on-disk cache 2. Configurable network cache proxy 3. Official Hugging Face Hub ### Motivation - Dis...
wrmedford
https://github.com/huggingface/datasets/issues/7545
null
false
3,027,024,285
7,544
Add try_original_type to DatasetDict.map
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7544). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Sure! I just committed the changes", "@lhoestq \r\nLet me know if there are other thi...
2025-04-29T04:39:44
2025-05-05T14:42:49
2025-05-05T14:42:49
This PR resolves #7472 for DatasetDict The previously merged PR #7483 added `try_original_type` to ArrowDataset, but DatasetDict misses `try_original_type` Cc: @lhoestq
yoshitomo-matsubara
https://github.com/huggingface/datasets/pull/7544
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7544", "html_url": "https://github.com/huggingface/datasets/pull/7544", "diff_url": "https://github.com/huggingface/datasets/pull/7544.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7544.patch", "merged_at": "2025-05-05T14:42...
true
3,026,867,706
7,543
The memory-disk mapping failure issue of the map function(resolved, but there are some suggestions.)
closed
[]
2025-04-29T03:04:59
2025-04-30T02:22:17
2025-04-30T02:22:17
### Describe the bug ## bug When the map function processes a large dataset, it temporarily stores the data in a cache file on the disk. After the data is stored, the memory occupied by it is released. Therefore, when using the map function to process a large-scale dataset, only a dataset space of the size of `writer_...
jxma20
https://github.com/huggingface/datasets/issues/7543
null
false
3,025,054,630
7,542
set dev version
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7542). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-04-28T14:03:48
2025-04-28T14:08:37
2025-04-28T14:04:00
null
lhoestq
https://github.com/huggingface/datasets/pull/7542
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7542", "html_url": "https://github.com/huggingface/datasets/pull/7542", "diff_url": "https://github.com/huggingface/datasets/pull/7542.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7542.patch", "merged_at": "2025-04-28T14:04...
true
3,025,045,919
7,541
release: 3.5.1
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7541). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-04-28T14:00:59
2025-04-28T14:03:38
2025-04-28T14:01:54
null
lhoestq
https://github.com/huggingface/datasets/pull/7541
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7541", "html_url": "https://github.com/huggingface/datasets/pull/7541", "diff_url": "https://github.com/huggingface/datasets/pull/7541.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7541.patch", "merged_at": "2025-04-28T14:01...
true
3,024,862,966
7,540
support pyarrow 20
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7540). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-04-28T13:01:11
2025-04-28T13:23:53
2025-04-28T13:23:52
fix ``` TypeError: ArrayExtensionArray.to_pylist() got an unexpected keyword argument 'maps_as_pydicts' ```
lhoestq
https://github.com/huggingface/datasets/pull/7540
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7540", "html_url": "https://github.com/huggingface/datasets/pull/7540", "diff_url": "https://github.com/huggingface/datasets/pull/7540.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7540.patch", "merged_at": "2025-04-28T13:23...
true
3,023,311,163
7,539
Fix IterableDataset state_dict shard_example_idx counting
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7539). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Hi ! FYI I made a PR to fix https://github.com/huggingface/datasets/issues/7538 and it ...
2025-04-27T20:41:18
2025-05-06T14:24:25
2025-05-06T14:24:24
# Fix IterableDataset's state_dict shard_example_idx reporting ## Description This PR fixes issue #7475 where the `shard_example_idx` value in `IterableDataset`'s `state_dict()` always equals the number of samples in a shard, even if only a few examples have been consumed. The issue is in the `_iter_arrow` met...
Harry-Yang0518
https://github.com/huggingface/datasets/pull/7539
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7539", "html_url": "https://github.com/huggingface/datasets/pull/7539", "diff_url": "https://github.com/huggingface/datasets/pull/7539.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7539.patch", "merged_at": null }
true
3,023,280,056
7,538
`IterableDataset` drops samples when resuming from a checkpoint
closed
[ "Thanks for reporting ! I fixed the issue using RebatchedArrowExamplesIterable before the formatted iterable" ]
2025-04-27T19:34:49
2025-05-06T14:04:05
2025-05-06T14:03:42
When resuming from a checkpoint, `IterableDataset` will drop samples if `num_shards % world_size == 0` and the underlying example supports `iter_arrow` and needs to be formatted. In that case, the `FormattedExamplesIterable` fetches a batch of samples from the child iterable's `iter_arrow` and yields them one by one ...
mariosasko
https://github.com/huggingface/datasets/issues/7538
null
false
3,018,792,966
7,537
`datasets.map(..., num_proc=4)` multi-processing fails
open
[ "related: https://github.com/huggingface/datasets/issues/7510\n\nwe need to do more tests to see if latest `dill` is deterministic" ]
2025-04-25T01:53:47
2025-05-06T13:12:08
null
The following code fails in python 3.11+ ```python tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"]) ``` Error log: ```bash Traceback (most recent call last): File "/usr/local/lib/python3.12/dist-packages/multiprocess/process.py", line 315, in _bootstrap self.ru...
faaany
https://github.com/huggingface/datasets/issues/7537
null
false
3,018,425,549
7,536
[Errno 13] Permission denied: on `.incomplete` file
closed
[ "It must be an issue with umask being used by multiple threads indeed. Maybe we can try to make a thread safe function to apply the umask (using filelock for example)", "> It must be an issue with umask being used by multiple threads indeed. Maybe we can try to make a thread safe function to apply the umask (usin...
2025-04-24T20:52:45
2025-05-06T13:05:01
2025-05-06T13:05:01
### Describe the bug When downloading a dataset, we frequently hit the below Permission Denied error. This looks to happen (at least) across datasets in HF, S3, and GCS. It looks like the `temp_file` being passed [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L412) can somet...
ryan-clancy
https://github.com/huggingface/datasets/issues/7536
null
false
3,018,289,872
7,535
Change dill version in requirements
open
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7535). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-04-24T19:44:28
2025-05-19T14:51:29
null
Change dill version to >=0.3.9,<0.4.5 and check for errors
JGrel
https://github.com/huggingface/datasets/pull/7535
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7535", "html_url": "https://github.com/huggingface/datasets/pull/7535", "diff_url": "https://github.com/huggingface/datasets/pull/7535.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7535.patch", "merged_at": null }
true
3,017,259,407
7,534
TensorFlow RaggedTensor Support (batch-level)
open
[ "Keras doesn't support other inputs other than tf.data.Dataset objects ? it's a bit painful to have to support and maintain this kind of integration\n\nIs there a way to use a `datasets.Dataset` with outputs formatted as tensors / ragged tensors instead ? like in https://huggingface.co/docs/datasets/use_with_tensor...
2025-04-24T13:14:52
2025-06-30T17:03:39
null
### Feature request Hi, Currently datasets does not support RaggedTensor output on batch-level. When building a Object Detection Dataset (with TensorFlow) I need to enable RaggedTensors as that's how BBoxes & classes are expected from the Keras Model POV. Currently there's a error thrown saying that "Nested Data is ...
Lundez
https://github.com/huggingface/datasets/issues/7534
null
false
3,015,075,086
7,533
Add custom fingerprint support to `from_generator`
open
[ "This is great !\r\n\r\nWhat do you think of passing `config_id=` directly to the builder instead of just the suffix ? This would be a power user argument though, or for internal use. And in from_generator the new argument can be `fingerprint=` as in `Dataset.__init__()`\r\n\r\nThe `config_id` can be defined using ...
2025-04-23T19:31:35
2025-09-15T19:36:34
null
This PR adds `dataset_id_suffix` parameter to 'Dataset.from_generator' function. `Dataset.from_generator` function passes all of its arguments to `BuilderConfig.create_config_id`, including generator function itself. `BuilderConfig.create_config_id` function tries to hash all the args, which can take a large amount ...
simonreise
https://github.com/huggingface/datasets/pull/7533
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7533", "html_url": "https://github.com/huggingface/datasets/pull/7533", "diff_url": "https://github.com/huggingface/datasets/pull/7533.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7533.patch", "merged_at": null }
true
3,009,546,204
7,532
Document the HF_DATASETS_CACHE environment variable in the datasets cache documentation
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7532). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Your clarification in your comment at https://github.com/huggingface/datasets/issues/74...
2025-04-22T00:23:13
2025-05-06T15:54:38
2025-05-06T15:54:38
This pull request updates the Datasets documentation to include the `HF_DATASETS_CACHE` environment variable. While the current documentation only mentions `HF_HOME` for overriding the default cache directory, `HF_DATASETS_CACHE` is also a supported and useful option for specifying a custom cache location for dataset...
Harry-Yang0518
https://github.com/huggingface/datasets/pull/7532
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7532", "html_url": "https://github.com/huggingface/datasets/pull/7532", "diff_url": "https://github.com/huggingface/datasets/pull/7532.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7532.patch", "merged_at": "2025-05-06T15:54...
true
3,008,914,887
7,531
Deepspeed reward training hangs at end of training with Dataset.from_list
open
[ "Hi ! How big is the dataset ? if you load it using `from_list`, the dataset lives in memory and has to be copied to every gpu process, which can be slow.\n\nIt's fasted if you load it from JSON files from disk, because in that case the dataset in converted to Arrow and loaded from disk using memory mapping. Memory...
2025-04-21T17:29:20
2025-06-29T06:20:45
null
There seems to be a weird interaction between Deepspeed, the Dataset.from_list method and trl's RewardTrainer. On a multi-GPU setup (10 A100s), training always hangs at the very end of training until it times out. The training itself works fine until the end of training and running the same script with Deepspeed on a s...
Matt00n
https://github.com/huggingface/datasets/issues/7531
null
false
3,007,452,499
7,530
How to solve "Spaces stuck in Building" problems
closed
[ "I'm facing the same issue—Space stuck in \"Building\" even after restart and Factory rebuild. Any fix?\n", "> I'm facing the same issue—Space stuck in \"Building\" even after restart and Factory rebuild. Any fix?\n\nAlso see https://github.com/huggingface/huggingface_hub/issues/3019", "I'm facing the same issu...
2025-04-21T03:08:38
2025-04-22T07:49:52
2025-04-22T07:49:52
### Describe the bug Public spaces may stuck in Building after restarting, error log as follows: build error Unexpected job error ERROR: failed to push spaces-registry.huggingface.tech/spaces/*:cpu-*-*: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-*: 401...
ghost
https://github.com/huggingface/datasets/issues/7530
null
false
3,007,118,969
7,529
audio folder builder cannot detect custom split name
open
[]
2025-04-20T16:53:21
2025-04-20T16:53:21
null
### Describe the bug when using audio folder builder (`load_dataset("audiofolder", data_dir="/path/to/folder")`), it cannot detect custom split name other than train/validation/test ### Steps to reproduce the bug i have the following folder structure ``` my_dataset/ ├── train/ │ ├── lorem.wav │ ├── … │ └── met...
phineas-pta
https://github.com/huggingface/datasets/issues/7529
null
false
3,006,433,485
7,528
Data Studio Error: Convert JSONL incorrectly
open
[ "Hi ! Your JSONL file is incompatible with Arrow / Parquet. Indeed in Arrow / Parquet every dict should have the same keys, while in your dataset the bboxes have varying keys.\n\nThis causes the Data Studio to treat the bboxes as if each row was missing the keys from other rows.\n\nFeel free to take a look at the d...
2025-04-19T13:21:44
2025-05-06T13:18:38
null
### Describe the bug Hi there, I uploaded a dataset here https://huggingface.co/datasets/V-STaR-Bench/V-STaR, but I found that Data Studio incorrectly convert the "bboxes" value for the whole dataset. Therefore, anyone who downloaded the dataset via the API would get the wrong "bboxes" value in the data file. Could ...
zxccade
https://github.com/huggingface/datasets/issues/7528
null
false
3,005,242,422
7,527
Auto-merge option for `convert-to-parquet`
closed
[ "Alternatively, there could be an option to switch from submitting PRs to just committing changes directly to `main`.", "Why not, I'd be in favor of `--merge-pull-request` to call `HfApi().merge_pull_request()` at the end of the conversion :) feel free to open a PR if you'd like", "#self-assign", "Closing sin...
2025-04-18T16:03:22
2025-07-18T19:09:03
2025-07-18T19:09:03
### Feature request Add a command-line option, e.g. `--auto-merge-pull-request` that enables automatic merging of the commits created by the `convert-to-parquet` tool. ### Motivation Large datasets may result in dozens of PRs due to the splitting mechanism. Each of these has to be manually accepted via the website. ...
klamike
https://github.com/huggingface/datasets/issues/7527
null
false
3,005,107,536
7,526
Faster downloads/uploads with Xet storage
open
[]
2025-04-18T14:46:42
2025-05-12T12:09:09
null
![Image](https://github.com/user-attachments/assets/6e247f4a-d436-4428-a682-fe18ebdc73a9) ## Xet is out ! Over the past few weeks, Hugging Face’s [Xet Team](https://huggingface.co/xet-team) took a major step forward by [migrating the first Model and Dataset repositories off LFS and to Xet storage](https://huggingface...
lhoestq
https://github.com/huggingface/datasets/issues/7526
null
false
3,003,032,248
7,525
Fix indexing in split commit messages
closed
[ "Hi ! this is expected and is coherent with other naming conventions in `datasets` such as parquet shards naming" ]
2025-04-17T17:06:26
2025-04-28T14:26:27
2025-04-28T14:26:27
When a large commit is split up, it seems the commit index in the message is zero-based while the total number is one-based. I came across this running `convert-to-parquet` and was wondering why there was no `6-of-6` commit. This PR fixes that by adding one to the commit index, so both are one-based. Current behavio...
klamike
https://github.com/huggingface/datasets/pull/7525
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7525", "html_url": "https://github.com/huggingface/datasets/pull/7525", "diff_url": "https://github.com/huggingface/datasets/pull/7525.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7525.patch", "merged_at": null }
true
3,002,067,826
7,524
correct use with polars example
closed
[]
2025-04-17T10:19:19
2025-04-28T13:48:34
2025-04-28T13:48:33
null
SiQube
https://github.com/huggingface/datasets/pull/7524
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7524", "html_url": "https://github.com/huggingface/datasets/pull/7524", "diff_url": "https://github.com/huggingface/datasets/pull/7524.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7524.patch", "merged_at": "2025-04-28T13:48...
true
2,999,616,692
7,523
mention av in video docs
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7523). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-04-16T13:11:12
2025-04-16T13:13:45
2025-04-16T13:11:42
null
lhoestq
https://github.com/huggingface/datasets/pull/7523
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7523", "html_url": "https://github.com/huggingface/datasets/pull/7523", "diff_url": "https://github.com/huggingface/datasets/pull/7523.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7523.patch", "merged_at": "2025-04-16T13:11...
true
2,998,169,017
7,522
Preserve formatting in concatenated IterableDataset
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7522). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-04-16T02:37:33
2025-05-19T15:07:38
2025-05-19T15:07:37
Fixes #7515
francescorubbo
https://github.com/huggingface/datasets/pull/7522
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7522", "html_url": "https://github.com/huggingface/datasets/pull/7522", "diff_url": "https://github.com/huggingface/datasets/pull/7522.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7522.patch", "merged_at": "2025-05-19T15:07...
true
2,997,666,366
7,521
fix: Image Feature in Datasets Library Fails to Handle bytearray Objects from Spark DataFrames (#7517)
closed
[ "@lhoestq let me know if you prefer to change the spark iterator so it outputs `bytes`", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7521). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ...
2025-04-15T21:23:58
2025-05-07T14:17:29
2025-05-07T14:17:29
## Task Support bytes-like objects (bytes and bytearray) in Features classes ### Description The `Features` classes only accept `bytes` objects for binary data, but not `bytearray`. This leads to errors when using `IterableDataset.from_spark()` with Spark DataFrames as they contain `bytearray` objects, even though...
giraffacarp
https://github.com/huggingface/datasets/pull/7521
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7521", "html_url": "https://github.com/huggingface/datasets/pull/7521", "diff_url": "https://github.com/huggingface/datasets/pull/7521.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7521.patch", "merged_at": "2025-05-07T14:17...
true
2,997,422,044
7,520
Update items in the dataset without `map`
open
[ "Hello!\n\nHave you looked at `Dataset.shard`? [Docs](https://huggingface.co/docs/datasets/en/process#shard)\n\nUsing this method you could break your dataset in N shards. Apply `map` on each shard and concatenate them back." ]
2025-04-15T19:39:01
2025-04-19T18:47:46
null
### Feature request I would like to be able to update items in my dataset without affecting all rows. At least if there was a range option, I would be able to process those items, save the dataset, and then continue. If I am supposed to split the dataset first, that is not clear, since the docs suggest that any of th...
mashdragon
https://github.com/huggingface/datasets/issues/7520
null
false
2,996,458,961
7,519
pdf docs fixes
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7519). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-04-15T13:35:56
2025-04-15T13:38:31
2025-04-15T13:36:03
close https://github.com/huggingface/datasets/issues/7494
lhoestq
https://github.com/huggingface/datasets/pull/7519
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7519", "html_url": "https://github.com/huggingface/datasets/pull/7519", "diff_url": "https://github.com/huggingface/datasets/pull/7519.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7519.patch", "merged_at": "2025-04-15T13:36...
true
2,996,141,825
7,518
num_proc parallelization works only for first ~10s.
open
[ "Hi, can you check if the processes are still alive ? It's a bit weird because `datasets` does check if processes crash and return an error in that case", "Thank you for reverting quickly. I digged a bit, and realized my disk's IOPS is also limited - which is causing this. will check further and report if it's an...
2025-04-15T11:44:03
2025-04-15T13:12:13
null
### Describe the bug When I try to load an already downloaded dataset with num_proc=64, the speed is very high for the first 10-20 seconds acheiving 30-40K samples / s, and 100% utilization for all cores but it soon drops to <= 1000 with almost 0% utilization for most cores. ### Steps to reproduce the bug ``` // do...
pshishodiaa
https://github.com/huggingface/datasets/issues/7518
null
false
2,996,106,077
7,517
Image Feature in Datasets Library Fails to Handle bytearray Objects from Spark DataFrames
closed
[ "Hi ! The `Image()` type accepts either\n- a `bytes` object containing the image bytes\n- a `str` object containing the image path\n- a `PIL.Image` object\n\nbut it doesn't support `bytearray`, maybe you can convert to `bytes` beforehand ?", "Hi @lhoestq, \nconverting to bytes is certainly possible and would work...
2025-04-15T11:29:17
2025-05-07T14:17:30
2025-05-07T14:17:30
### Describe the bug When using `IterableDataset.from_spark()` with a Spark DataFrame containing image data, the `Image` feature class fails to properly process this data type, causing an `AttributeError: 'bytearray' object has no attribute 'get'` ### Steps to reproduce the bug 1. Create a Spark DataFrame with a col...
giraffacarp
https://github.com/huggingface/datasets/issues/7517
null
false
2,995,780,283
7,516
unsloth/DeepSeek-R1-Distill-Qwen-32B server error
closed
[]
2025-04-15T09:26:53
2025-04-15T09:57:26
2025-04-15T09:57:26
### Describe the bug hfhubhttperror: 500 server error: internal server error for url: https://huggingface.co/api/models/unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit/commits/main (request id: root=1-67fe23fa-3a2150eb444c2a823c388579;de3aed68-c397-4da5-94d4-6565efd3b919) internal error - we're working hard to fix this ...
Editor-1
https://github.com/huggingface/datasets/issues/7516
null
false
2,995,082,418
7,515
`concatenate_datasets` does not preserve Pytorch format for IterableDataset
closed
[ "Hi ! Oh indeed it would be cool to return the same format in that case. Would you like to submit a PR ? The function that does the concatenation is here:\n\nhttps://github.com/huggingface/datasets/blob/90e5bf8a8599b625d6103ee5ac83b98269991141/src/datasets/iterable_dataset.py#L3375-L3380", "Thank you for the poin...
2025-04-15T04:36:34
2025-05-19T15:07:38
2025-05-19T15:07:38
### Describe the bug When concatenating datasets with `concatenate_datasets`, I would expect the resulting combined dataset to be in the same format as the inputs (assuming it's consistent). This is indeed the behavior when combining `Dataset`, but not when combining `IterableDataset`. Specifically, when applying `con...
francescorubbo
https://github.com/huggingface/datasets/issues/7515
null
false
2,994,714,923
7,514
Do not hash `generator` in `BuilderConfig.create_config_id`
closed
[]
2025-04-15T01:26:43
2025-04-23T11:55:55
2025-04-15T16:27:51
`Dataset.from_generator` function passes all of its arguments to `BuilderConfig.create_config_id`, including generator function itself. `BuilderConfig.create_config_id` function tries to hash all the args, and hashing a `generator` can take a large amount of time or even cause MemoryError if the dataset processed in a ...
simonreise
https://github.com/huggingface/datasets/pull/7514
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7514", "html_url": "https://github.com/huggingface/datasets/pull/7514", "diff_url": "https://github.com/huggingface/datasets/pull/7514.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7514.patch", "merged_at": null }
true
2,994,678,437
7,513
MemoryError while creating dataset from generator
open
[ "Upd: created a PR that can probably solve the problem: #7514", "Hi ! We need to take the generator into account for the cache. The generator is hashed to make the dataset fingerprint used by the cache. This way you can reload the Dataset from the cache without regenerating in subsequent `from_generator` calls.\n...
2025-04-15T01:02:02
2025-04-23T19:37:08
null
### Describe the bug # TL:DR `Dataset.from_generator` function passes all of its arguments to `BuilderConfig.create_config_id`, including `generator` function itself. `BuilderConfig.create_config_id` function tries to hash all the args, which can take a large amount of time or even cause MemoryError if the dataset pr...
simonreise
https://github.com/huggingface/datasets/issues/7513
null
false
2,994,043,544
7,512
.map() fails if function uses pyvista
open
[ "I found a similar (?) issue in https://github.com/huggingface/datasets/issues/6435, where someone had issues with forks and CUDA. According to https://huggingface.co/docs/datasets/main/en/process#multiprocessing we should do \n\n```\nfrom multiprocess import set_start_method\nset_start_method(\"spawn\")\n```\n\nto...
2025-04-14T19:43:02
2025-04-14T20:01:53
null
### Describe the bug Using PyVista inside a .map() produces a crash with `objc[78796]: +[NSResponder initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to ...
el-hult
https://github.com/huggingface/datasets/issues/7512
null
false
2,992,131,117
7,510
Incompatible dill version (0.3.9) in datasets 2.18.0 - 3.5.0
closed
[ "Hi ! We can bump `dill` to 0.3.9 if we make sure it's deterministic and doesn't break the caching mechanism in `datasets`.\n\nWould you be interested in opening a PR ? Then we can run the CI to see if it works", "Hi!. Yeah I can do it. Should I make any changes besides dill versions?", "There are probably some...
2025-04-14T07:22:44
2025-09-15T08:37:49
2025-09-15T08:37:49
### Describe the bug Datasets 2.18.0 - 3.5.0 has a dependency on dill < 0.3.9. This causes errors with dill >= 0.3.9. Could you please take a look into it and make it compatible? ### Steps to reproduce the bug 1. Install setuptools >= 2.18.0 2. Install dill >=0.3.9 3. Run pip check 4. Output: ERROR: pip's dependenc...
JGrel
https://github.com/huggingface/datasets/issues/7510
null
false
2,991,484,542
7,509
Dataset uses excessive memory when loading files
open
[ "small update: I converted the jsons to parquet and it now works well with 32 proc and the same node. \nI still think this needs to be understood, since json is a very popular and easy-to-use format. ", "Hi ! The JSON loader loads full files in memory, unless they are JSON Lines. In this case it iterates on the J...
2025-04-13T21:09:49
2025-04-28T15:18:55
null
### Describe the bug Hi, I am having an issue when loading a dataset. I have about 200 JSON files, each about 1GB (total about 215GB). Each row has a few features which are lists of ints. I am trying to load the dataset using `load_dataset`. The dataset is about 1.5M samples. I use `num_proc=32` and a node with 378GB of...
avishaiElmakies
https://github.com/huggingface/datasets/issues/7509
null
false
2,986,612,934
7,508
Iterating over Image feature columns is extremely slow
open
[ "Hi ! Could it be because the `Image()` type in dataset does `image = Image.open(image_path)` and also `image.load()` which actually loads the image data in memory ? This is needed to avoid too many open files issues, see https://github.com/huggingface/datasets/issues/3985", "Yes, that seems to be it. For my pur...
2025-04-10T19:00:54
2025-04-15T17:57:08
null
We are trying to load datasets where the image column stores `PIL.PngImagePlugin.PngImageFile` images. However, iterating over these datasets is extremely slow. What I have found: 1. It is the presence of the image column that causes the slowdown. Removing the column from the dataset results in blazingly fast (as expe...
sohamparikh
https://github.com/huggingface/datasets/issues/7508
null
false
2,984,309,806
7,507
Front-end statistical data quantity deviation
open
[ "Hi ! the format of this dataset is not supported by the Dataset Viewer. It looks like this dataset was saved using `save_to_disk()` which is meant for local storage / easy reload without compression, not for sharing online." ]
2025-04-10T02:51:38
2025-04-15T12:54:51
null
### Describe the bug While browsing the dataset at https://huggingface.co/datasets/NeuML/wikipedia-20250123, I noticed that a dataset with nearly 7M entries was estimated to be only 4M in size—almost half the actual amount. According to the post-download loading and the dataset_info (https://huggingface.co/datasets/Ne...
rangehow
https://github.com/huggingface/datasets/issues/7507
null
false
2,981,687,450
7,506
HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access Fineweb-10BT on 4A100 GPUs using SLURM
open
[ "Hi ! make sure to be logged in with your HF account (e.g. using `huggingface-cli login` or passing `token=` to `load_dataset()`), otherwise you'll get rate limited at one point", "Hey @calvintanama! Just building on what @lhoestq mentioned above — I ran into similar issues in multi-GPU SLURM setups and here’s wh...
2025-04-09T06:32:04
2025-06-29T06:04:59
null
### Describe the bug I am trying to run some finetunings on 4 A100 GPUs using SLURM using axolotl training framework which in turn uses Huggingface's Trainer and Accelerate on [Fineweb-10BT](https://huggingface.co/datasets/HuggingFaceFW/fineweb), but I end up running into 429 Client Error: Too Many Requests for URL er...
calvintanama
https://github.com/huggingface/datasets/issues/7506
null
false
2,979,926,156
7,505
HfHubHTTPError: 403 Forbidden: None. Cannot access content at: https://hf.co/api/s3proxy
open
[]
2025-04-08T14:08:40
2025-04-08T14:08:40
null
I have already logged in to Hugging Face using the CLI with my valid token. Now I am trying to download the datasets using the following code: from transformers import WhisperProcessor, WhisperForConditionalGeneration, WhisperTokenizer, Trainer, TrainingArguments, DataCollatorForSeq2Seq from datasets import load_dataset, Data...
hissain
https://github.com/huggingface/datasets/issues/7505
null
false
2,979,410,641
7,504
BuilderConfig ParquetConfig(...) doesn't have a 'use_auth_token' key.
open
[ "I encountered the same error, have you resolved it?", "Hi ! `use_auth_token` has been deprecated and removed some time ago. You should use `token` instead in `load_dataset()`", "Hi @lhoestq, I'd like to take this up.\n\nAs discussed in #7504, the issue arises when `use_auth_token` is passed to `load_dataset`, ...
2025-04-08T10:55:03
2025-06-28T09:18:09
null
### Describe the bug Trying to run the following fine-tuning script (based on this page [here](https://github.com/huggingface/instruction-tuned-sd)): ``` ! accelerate launch /content/instruction-tuned-sd/finetune_instruct_pix2pix.py \ --pretrained_model_name_or_path=${MODEL_ID} \ --dataset_name=${DATASET_NAME...
tteguayco
https://github.com/huggingface/datasets/issues/7504
null
false
2,978,512,625
7,503
Inconsistency between load_dataset and load_from_disk functionality
open
[ "Hi ! you can find more info here: https://github.com/huggingface/datasets/issues/5044#issuecomment-1263714347\n\n> What's the recommended approach for this use case? Should I manually process my gsm8k-new dataset to make it compatible with load_dataset? Is there a standard way to convert between these formats?\n\n...
2025-04-08T03:46:22
2025-06-28T08:51:16
null
## Issue Description I've encountered confusion when using `load_dataset` and `load_from_disk` in the datasets library. Specifically, when working offline with the gsm8k dataset, I can load it using a local path: ```python import datasets ds = datasets.load_dataset('/root/xxx/datasets/gsm8k', 'main') ``` output: ```t...
zzzzzec
https://github.com/huggingface/datasets/issues/7503
null
false
2,977,453,814
7,502
`load_dataset` of size 40GB creates a cache of >720GB
closed
[ "Hi ! Parquet is a compressed format. When you load a dataset, it uncompresses the Parquet data into Arrow data on your disk. That's why you can indeed end up with 720GB of uncompressed data on disk. The uncompression is needed to enable performant dataset objects (especially for random access).\n\nTo save some sto...
2025-04-07T16:52:34
2025-04-15T15:22:12
2025-04-15T15:22:11
Hi there, I am trying to load a dataset from the Hugging Face Hub and split it into train and validation splits. Somehow, when I try to do it with `load_dataset`, it exhausts my disk quota. So, I tried manually downloading the parquet files from the hub and loading them as follows: ```python ds = DatasetDict( ...
pietrolesci
https://github.com/huggingface/datasets/issues/7502
null
false
2,976,721,014
7,501
Nested Feature raises ArrowNotImplementedError: Unsupported cast using function cast_struct
closed
[ "Solved by the default `load_dataset(features)` parameters. Do not use `Sequence` for the `list` in `list[any]` json schema, just simply use `[]`. For example, `\"b\": Sequence({...})` fails but `\"b\": [{...}]` works fine." ]
2025-04-07T12:35:39
2025-04-07T12:43:04
2025-04-07T12:43:03
### Describe the bug `datasets.Features` seems to be unable to handle json file that contains fields of `list[dict]`. ### Steps to reproduce the bug ```json // test.json {"a": 1, "b": [{"c": 2, "d": 3}, {"c": 4, "d": 5}]} {"a": 5, "b": [{"c": 7, "d": 8}, {"c": 9, "d": 10}]} ``` ```python import json from datasets i...
yaner-here
https://github.com/huggingface/datasets/issues/7501
null
false
2,974,841,921
7,500
Make `with_format` correctly indicate that a `Dataset` is compatible with PyTorch's `Dataset` class
open
[ "Does the torch `DataLoader` really require the dataset to be a subclass of `torch.utils.data.Dataset` ? Or is there a simpler type we could use ?\n\nPS: also note that a dataset without `with_format()` can also be used in a torch `DataLoader` . Calling `with_format(\"torch\")` simply makes the output of the datase...
2025-04-06T09:56:09
2025-04-15T12:57:39
null
### Feature request Currently `datasets` does not correctly indicate to the Python type-checker (e.g. `pyright` / `Pylance`) that the output of `with_format` is compatible with PyTorch's `Dataloader` since it does not indicate that the HuggingFace `Dataset` is compatible with the PyTorch `Dataset` class. It would be g...
benglewis
https://github.com/huggingface/datasets/issues/7500
null
false
2,973,489,126
7,499
Added cache dirs to load and file_utils
closed
[ "hi ! the `hf_hub_download` cache_dir is a different cache directory than the one for `datasets`.\r\n\r\n`hf_hub_download` uses the `huggingface_hub` cache which is located in by default in `~/.cache/huggingface/hub`, while `datasets` uses a different cache for Arrow files and map() results `~/.cache/huggingface/da...
2025-04-04T22:36:04
2025-05-07T14:07:34
2025-05-07T14:07:34
When adding "cache_dir" to datasets.load_dataset, the cache_dir gets lost in the function calls, changing the cache dir to the default path. This fixes a few of these instances.
gmongaras
https://github.com/huggingface/datasets/pull/7499
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7499", "html_url": "https://github.com/huggingface/datasets/pull/7499", "diff_url": "https://github.com/huggingface/datasets/pull/7499.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7499.patch", "merged_at": null }
true
2,969,218,273
7,498
Extreme memory bandwidth.
open
[]
2025-04-03T11:09:08
2025-04-03T11:11:22
null
### Describe the bug When I use hf datasets on 4 GPUs with 40 workers, I get some extreme memory bandwidth of a constant ~3GB/s. However, if I wrap the dataset in `IterableDataset`, this issue is gone and the data also loads way faster (4x faster training on 1 worker). It seems like the workers don't share memory and b...
J0SZ
https://github.com/huggingface/datasets/issues/7498
null
false
2,968,553,693
7,497
How to convert videos to images?
open
[ "Hi ! there is some documentation here on how to read video frames: https://huggingface.co/docs/datasets/video_load" ]
2025-04-03T07:08:39
2025-04-15T12:35:15
null
### Feature request Does someone know how to return the images from videos? ### Motivation I am trying to use openpi(https://github.com/Physical-Intelligence/openpi) to finetune my Lerobot dataset (V2.0 and V2.1). I find that although the dataset is v2.0, they are different. It seems like Lerobot V2.0 has two versi...
Loki-Lu
https://github.com/huggingface/datasets/issues/7497
null
false
2,967,345,522
7,496
Json builder: Allow features to override problematic Arrow types
open
[ "Hi ! It would be cool indeed, currently the JSON data are generally loaded here: \n\nhttps://github.com/huggingface/datasets/blob/90e5bf8a8599b625d6103ee5ac83b98269991141/src/datasets/packaged_modules/json/json.py#L137-L140\n\nMaybe we can pass a Arrow `schema` to avoid errors ?" ]
2025-04-02T19:27:16
2025-04-15T13:06:09
null
### Feature request In the JSON builder, use explicitly requested feature types before or while converting to Arrow. ### Motivation Working with JSON datasets is really hard because of Arrow. At the very least, it seems like it should be possible to work-around these problems by explicitly setting problematic colum...
edmcman
https://github.com/huggingface/datasets/issues/7496
null
false
2,967,034,060
7,495
Columns in the dataset obtained though load_dataset do not correspond to the one in the dataset viewer since 3.4.0
closed
[ "Hi, the dataset viewer shows all the possible columns and their types, but `load_dataset()` iterates through all the columns that you defined. It seems that you only have one column (‘audio’) defined in your dataset because when I ran `print(ds.column_names)`, the only name I got was “audio”. You need to clearly d...
2025-04-02T17:01:11
2025-07-02T23:24:57
2025-07-02T23:24:57
### Describe the bug I have noticed that on my dataset named [BrunoHays/Accueil_UBS](https://huggingface.co/datasets/BrunoHays/Accueil_UBS), since the version 3.4.0, every column except audio is missing when I load the dataset. Interestingly, the dataset viewer still shows the correct columns ### Steps to reproduce ...
bruno-hays
https://github.com/huggingface/datasets/issues/7495
null
false
2,965,347,685
7,494
Broken links in pdf loading documentation
closed
[ "thanks for reporting ! I fixed the links, the docs will be updated in the next release" ]
2025-04-02T06:45:22
2025-04-15T13:36:25
2025-04-15T13:36:04
### Describe the bug Hi, just a couple of small issues I ran into while reading the docs for [loading pdf data](https://huggingface.co/docs/datasets/main/en/document_load): 1. The link for the [`Create a pdf dataset`](https://huggingface.co/docs/datasets/main/en/document_load#pdffolder) points to https://huggingface....
VyoJ
https://github.com/huggingface/datasets/issues/7494
null
false
2,964,025,179
7,493
push_to_hub does not upload videos
open
[ "Hi ! the `Video` type is still experimental, and in particular `push_to_hub` doesn't upload videos at the moment (only the paths).\n\nThere is an open question to either upload the videos inside the Parquet files, or rather have them as separate files (which is great to enable remote seeking/streaming)", "im hav...
2025-04-01T17:00:20
2025-09-02T10:32:36
null
### Describe the bug Hello, I would like to upload a video dataset (some .mp4 files and some segments within them), i.e. rows correspond to subsequences from videos. Videos might be referenced by several rows. I created a dataset locally and it references the videos and the video readers can read them correctly. I u...
DominikVincent
https://github.com/huggingface/datasets/issues/7493
null
false