Column schema (name, dtype, value/length range):
  id               int64          599M to 3.48B
  number           int64          1 to 7.8k
  title            string         lengths 1 to 290
  state            string         2 values
  comments         list           lengths 0 to 30
  created_at       timestamp[s]   2020-04-14 10:18:02 to 2025-10-05 06:37:50
  updated_at       timestamp[s]   2020-04-27 16:04:17 to 2025-10-05 10:32:43
  closed_at        timestamp[s]   2020-04-14 12:01:40 to 2025-10-01 13:56:03
  body             string         lengths 0 to 228k
  user             string         lengths 3 to 26
  html_url         string         lengths 46 to 51
  pull_request     dict
  is_pull_request  bool           2 classes
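Each row that follows describes one GitHub issue or pull request, with the field values listed in the column order given above (id, number, title, state, comments, created_at, updated_at, closed_at, body, user, html_url, pull_request, is_pull_request). Below is a minimal sketch of how a table with this schema could be loaded and queried with the `datasets` library; the repository id is a hypothetical placeholder, and only the column names are taken from the schema above.

```python
# Minimal sketch, assuming the rows below live in a Hub dataset with this schema.
# "my-org/github-issues" is a hypothetical repository id, not a real path.
from datasets import load_dataset

ds = load_dataset("my-org/github-issues", split="train")

# The `is_pull_request` flag separates pull requests from plain issues.
issues = ds.filter(lambda row: not row["is_pull_request"])

# Print a compact summary of the five most recent issues by issue number.
latest = sorted(issues, key=lambda row: row["number"], reverse=True)[:5]
for row in latest:
    print(row["number"], row["state"], row["title"], row["html_url"])
```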
2,010,693,912
6,451
Unable to read "marsyas/gtzan" data
closed
[ "Hi! We've merged a [PR](https://huggingface.co/datasets/marsyas/gtzan/discussions/1) that fixes the script's path logic on Windows.", "I have transferred the discussion to the corresponding dataset: https://huggingface.co/datasets/marsyas/gtzan/discussions/2\r\n\r\nLet's continue there.", "@mariosasko @albertv...
2023-11-25T15:13:17
2023-12-01T12:53:46
2023-11-27T09:36:25
Hi, this is my code and the error: ``` from datasets import load_dataset gtzan = load_dataset("marsyas/gtzan", "all") ``` [error_trace.txt](https://github.com/huggingface/datasets/files/13464397/error_trace.txt) [audio_yml.txt](https://github.com/huggingface/datasets/files/13464410/audio_yml.txt) Python 3.11.5 ...
gerald-wrona
https://github.com/huggingface/datasets/issues/6451
null
false
2,009,491,386
6,450
Support multiple image/audio columns in ImageFolder/AudioFolder
closed
[ "A duplicate of https://github.com/huggingface/datasets/issues/5760" ]
2023-11-24T10:34:09
2023-11-28T11:07:17
2023-11-24T17:24:38
### Feature request Have a metadata.csv file with multiple columns that point to relative image or audio files. ### Motivation Currently, ImageFolder allows one column, called `file_name`, pointing to relative image files. On the same model, AudioFolder allows one column, called `file_name`, pointing to relative aud...
severo
https://github.com/huggingface/datasets/issues/6450
null
false
2,008,617,992
6,449
Fix metadata file resolution when inferred pattern is `**`
closed
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-11-23T17:35:02
2023-11-27T10:02:56
2023-11-24T17:13:02
Refetch metadata files in case they were dropped by `filter_extensions` in the previous step. Fix #6442
mariosasko
https://github.com/huggingface/datasets/pull/6449
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6449", "html_url": "https://github.com/huggingface/datasets/pull/6449", "diff_url": "https://github.com/huggingface/datasets/pull/6449.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6449.patch", "merged_at": "2023-11-24T17:13...
true
2,008,614,985
6,448
Use parquet export if possible
closed
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-11-23T17:31:57
2023-12-01T17:57:17
2023-12-01T17:50:59
The idea is to make this code work for datasets with scripts if they have a Parquet export ```python ds = load_dataset("squad", trust_remote_code=False) ``` And more generally, it means we use the Parquet export whenever it's possible (it's safer and faster than dataset scripts). I also added a `config.USE_P...
lhoestq
https://github.com/huggingface/datasets/pull/6448
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6448", "html_url": "https://github.com/huggingface/datasets/pull/6448", "diff_url": "https://github.com/huggingface/datasets/pull/6448.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6448.patch", "merged_at": "2023-12-01T17:50...
true
2,008,195,298
6,447
Support one dataset loader per config when using YAML
open
[]
2023-11-23T13:03:07
2023-11-23T13:03:07
null
### Feature request See https://huggingface.co/datasets/datasets-examples/doc-unsupported-1 I would like to use CSV loader for the "csv" config, JSONL loader for the "jsonl" config, etc. ### Motivation It would be more flexible for the users ### Your contribution No specific contribution
severo
https://github.com/huggingface/datasets/issues/6447
null
false
2,007,092,708
6,446
Speech Commands v2 dataset doesn't match AST-v2 config
closed
[ "You can use `.align_labels_with_mapping` on the dataset to align the labels with the model config.\r\n\r\nRegarding the number of labels, only the special `_silence_` label corresponding to noise is missing, which is consistent with the model paper (reports training on 35 labels). You can run a `.filter` to drop ...
2023-11-22T20:46:36
2023-11-28T14:46:08
2023-11-28T14:46:08
### Describe the bug [According](https://huggingface.co/MIT/ast-finetuned-speech-commands-v2) to `MIT/ast-finetuned-speech-commands-v2`, the model was trained on the Speech Commands v2 dataset. However, while the model config says the model should have 35 class labels, the dataset itself has 36 class labels. Moreover,...
vymao
https://github.com/huggingface/datasets/issues/6446
null
false
2,006,958,595
6,445
Use `filelock` package for file locking
closed
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-11-22T19:04:45
2023-11-23T18:47:30
2023-11-23T18:41:23
Use the `filelock` package instead of `datasets.utils.filelock` for file locking to be consistent with `huggingface_hub` and not to be responsible for improving the `filelock` capabilities 🙂. (Reverts https://github.com/huggingface/datasets/pull/859, but these `INFO` logs are not printed by default (anymore?), so ...
mariosasko
https://github.com/huggingface/datasets/pull/6445
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6445", "html_url": "https://github.com/huggingface/datasets/pull/6445", "diff_url": "https://github.com/huggingface/datasets/pull/6445.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6445.patch", "merged_at": "2023-11-23T18:41...
true
2,006,842,179
6,444
Remove `Table.__getstate__` and `Table.__setstate__`
closed
[ "Thanks for working on this! The [issue](https://bugs.python.org/issue24658) with pickling objects larger than 4GB seems to be patched in Python 3.8 (the minimal supported version was 3.6 at the time of implementing this), so a simple solution would be removing the `Table.__setstate__` and `Table.__getstate__` over...
2023-11-22T17:55:10
2023-11-23T15:19:43
2023-11-23T15:13:28
When using distributed training, the code of `os.remove(filename)` may be executed separately by each rank, leading to `FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmprxxxxxxx.arrow'` ```python from torch import distributed as dist if dist.get_rank() == 0: dataset = process_dataset(*args, ...
LZHgrla
https://github.com/huggingface/datasets/pull/6444
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6444", "html_url": "https://github.com/huggingface/datasets/pull/6444", "diff_url": "https://github.com/huggingface/datasets/pull/6444.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6444.patch", "merged_at": "2023-11-23T15:13...
true
2,006,568,368
6,443
Trouble loading files defined in YAML explicitly
open
[ "There is a typo in one of the file names - `data/edf.csv` should be renamed to `data/def.csv` 🙂. ", "wow, I reviewed it twice to avoid being ashamed like that, but... I didn't notice the typo.\r\n\r\n---\r\n\r\nBesides this: do you think we would be able to improve the error message to make this clearer?", "H...
2023-11-22T15:18:10
2025-06-23T13:46:46
null
Look at https://huggingface.co/datasets/severo/doc-yaml-2 It's a reproduction of the example given in the docs at https://huggingface.co/docs/hub/datasets-manual-configuration ``` You can select multiple files per split using a list of paths: my_dataset_repository/ ├── README.md ├── data/ │ ├── abc.csv ...
severo
https://github.com/huggingface/datasets/issues/6443
null
false
2,006,086,907
6,442
Trouble loading image folder with additional features - metadata file ignored
closed
[ "I reproduced too:\r\n- root: metadata file is ignored (https://huggingface.co/datasets/severo/doc-image-3)\r\n- data/ dir: metadata file is ignored (https://huggingface.co/datasets/severo/doc-image-4)\r\n- train/ dir: works (https://huggingface.co/datasets/severo/doc-image-5)" ]
2023-11-22T11:01:35
2023-11-24T17:13:03
2023-11-24T17:13:03
### Describe the bug Loading image folder with a caption column using `load_dataset(<image_folder_path>)` doesn't load the captions. When loading a local image folder with captions using `datasets==2.13.0` ``` from datasets import load_dataset data = load_dataset(<image_folder_path>) data.column_names ``` ...
linoytsaban
https://github.com/huggingface/datasets/issues/6442
null
false
2,004,985,857
6,441
Trouble Loading a Gated Dataset For User with Granted Permission
closed
[ "> Also when they try to click the url link for the dataset they get a 404 error.\r\n\r\nThis seems to be a Hub error then (cc @SBrandeis)", "Could you report this to https://discuss.huggingface.co/c/hub/23, providing the URL of the dataset, or at least if the dataset is public or private?", "Thanks for the rep...
2023-11-21T19:24:36
2023-12-13T08:27:16
2023-12-13T08:27:16
### Describe the bug I have granted permissions to several users to access a gated huggingface dataset. The users accepted the invite and when trying to load the dataset using their access token they get `FileNotFoundError: Couldn't find a dataset script at .....` . Also when they try to click the url link for the d...
e-trop
https://github.com/huggingface/datasets/issues/6441
null
false
2,004,509,301
6,440
`.map` not hashing under python 3.9
closed
[ "Tried to upgrade Python to 3.11 - still get this message. A partial solution is to NOT use `num_proc` at all. It will be considerably longer to finish the job.", "Hi! The `model = torch.compile(model)` line is problematic for our hashing logic. We would have to merge https://github.com/huggingface/datasets/pull/...
2023-11-21T15:14:54
2023-11-28T16:29:33
2023-11-28T16:29:33
### Describe the bug The `.map` function cannot hash under python 3.9. Tried to use [the solution here](https://github.com/huggingface/datasets/issues/4521#issuecomment-1205166653), but still get the same message: `Parameter 'function'=<function map_to_pred at 0x7fa0b49ead30> of the transform datasets.arrow_data...
changyeli
https://github.com/huggingface/datasets/issues/6440
null
false
2,002,916,514
6,439
Download + preparation speed of datasets.load_dataset is 20x slower than huggingface hub snapshot and manual loding
open
[]
2023-11-20T20:07:23
2023-11-20T20:07:37
null
### Describe the bug I am working with a dataset I am trying to publish. The path is Antreas/TALI. It's a fairly large dataset, and contains images, video, audio and text. I have been having multiple problems when the dataset is being downloaded using the load_dataset function -- even with 64 workers takin...
AntreasAntoniou
https://github.com/huggingface/datasets/issues/6439
null
false
2,002,032,804
6,438
Support GeoParquet
open
[ "Thank you, @severo ! I would be more than happy to help in any way I can. I am not familiar with this repo's codebase, but I would be eager to contribute. :)\r\n\r\nFor the preview in Datasets Hub, I think it makes sense to just display the geospatial column as text. If there were a dataset loader, though, I think...
2023-11-20T11:54:58
2024-02-07T08:36:51
null
### Feature request Support the GeoParquet format ### Motivation GeoParquet (https://geoparquet.org/) is a common format for sharing vectorial geospatial data on the cloud, along with "traditional" data columns. It would be nice to be able to load this format with datasets, and more generally, in the Datasets Hub...
severo
https://github.com/huggingface/datasets/issues/6438
null
false
2,001,272,606
6,437
Problem in training iterable dataset
open
[ "Has anyone ever encountered this problem before?", "`split_dataset_by_node` doesn't give the exact same number of examples to each node in the case of iterable datasets, though it tries to be as equal as possible. In particular if your dataset is sharded and you have a number of shards that is a factor of the nu...
2023-11-20T03:04:02
2024-05-22T03:14:13
null
### Describe the bug I am using PyTorch DDP (Distributed Data Parallel) to train my model. Since the data is too large to load into memory at once, I am using load_dataset to read the data as an iterable dataset. I have used datasets.distributed.split_dataset_by_node to distribute the dataset. However, I have notice...
21Timothy
https://github.com/huggingface/datasets/issues/6437
null
false
2,000,844,474
6,436
TypeError: <lambda>() takes 0 positional arguments but 1 was given
closed
[ "This looks like a problem with your environment rather than `datasets`.", "I meet the same problem,\r\nand originally use\r\n```python\r\nlocale.getpreferredencoding = lambda : \"UTF-8\"\r\n```\r\nand change to\r\n```\r\nlocale.getpreferredencoding = lambda x: \"UTF-8\"\r\n```\r\nand it works.", "> I meet the ...
2023-11-19T13:10:20
2025-05-05T18:21:21
2023-11-29T16:28:34
### Describe the bug ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) [<ipython-input-35-7b6becee3685>](https://localhost:8080/#) in <cell line: 1>() ----> 1 from datasets import Dataset 9 frames [/usr/lo...
ahmadmustafaanis
https://github.com/huggingface/datasets/issues/6436
null
false
2,000,690,513
6,435
Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
closed
[ "[This doc section](https://huggingface.co/docs/datasets/main/en/process#multiprocessing) explains how to modify the script to avoid this error.", "@mariosasko thank you very much, i'll check it", "@mariosasko no it does not\r\n\r\n`Dataset.filter() got an unexpected keyword argument 'with_rank'`" ]
2023-11-19T04:21:16
2024-01-27T17:14:20
2023-12-04T16:57:43
### Describe the bug 1. I ran dataset mapping with `num_proc=6` in it and got this error: `RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method` I can't actually find a way to run multi-GPU dataset mapping. Can you help? ### Steps to...
kopyl
https://github.com/huggingface/datasets/issues/6435
null
false
1,999,554,915
6,434
Use `ruff` for formatting
closed
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-11-17T16:53:22
2023-11-21T14:19:21
2023-11-21T14:13:13
Use `ruff` instead of `black` for formatting to be consistent with `transformers` ([PR](https://github.com/huggingface/transformers/pull/27144)) and `huggingface_hub` ([PR 1](https://github.com/huggingface/huggingface_hub/pull/1783) and [PR 2](https://github.com/huggingface/huggingface_hub/pull/1789)).
mariosasko
https://github.com/huggingface/datasets/pull/6434
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6434", "html_url": "https://github.com/huggingface/datasets/pull/6434", "diff_url": "https://github.com/huggingface/datasets/pull/6434.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6434.patch", "merged_at": "2023-11-21T14:13...
true
1,999,419,105
6,433
Better `tqdm` wrapper
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-11-17T15:45:15
2023-11-22T16:48:18
2023-11-22T16:42:08
This PR aligns the `tqdm` logic with `huggingface_hub` (without introducing breaking changes), as the current one is error-prone. Additionally, it improves the doc page about the `datasets`' utilities, and the handling of local `fsspec` paths in `cached_path`. Fix #6409
mariosasko
https://github.com/huggingface/datasets/pull/6433
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6433", "html_url": "https://github.com/huggingface/datasets/pull/6433", "diff_url": "https://github.com/huggingface/datasets/pull/6433.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6433.patch", "merged_at": "2023-11-22T16:42...
true
1,999,258,140
6,432
load_dataset does not load all of the data in my input file
open
[ "You should use `datasets.load_dataset` instead of `nlp.load_dataset`, as the `nlp` package is outdated.\r\n\r\nIf switching to `datasets.load_dataset` doesn't fix the issue, sharing the JSON file (feel free to replace the data with dummy data) would be nice so that we can reproduce it ourselves." ]
2023-11-17T14:28:50
2023-11-22T17:34:58
null
### Describe the bug I have 127 elements in my input dataset. When I do a len on the dataset after loaded, it is only 124 elements. ### Steps to reproduce the bug train_dataset = nlp.load_dataset(data_args.dataset_path, name=data_args.qg_format, split=nlp.Split.TRAIN) valid_dataset = nlp.load_dataset(data_...
demongolem-biz2
https://github.com/huggingface/datasets/issues/6432
null
false
1,997,202,770
6,431
Create DatasetNotFoundError and DataFilesNotFoundError
closed
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-11-16T16:02:55
2023-11-22T15:18:51
2023-11-22T15:12:33
Create `DatasetNotFoundError` and `DataFilesNotFoundError`. Fix #6397. CC: @severo
albertvillanova
https://github.com/huggingface/datasets/pull/6431
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6431", "html_url": "https://github.com/huggingface/datasets/pull/6431", "diff_url": "https://github.com/huggingface/datasets/pull/6431.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6431.patch", "merged_at": "2023-11-22T15:12...
true
1,996,723,698
6,429
Add trust_remote_code argument
closed
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-11-16T12:12:54
2023-11-28T16:10:39
2023-11-28T16:03:43
Draft about adding `trust_remote_code` to `load_dataset`. ```python ds = load_dataset(..., trust_remote_code=True) # run remote code (current default) ``` It would default to `True` (current behavior) and in the next major release it will prompt the user to check the code before running it (we'll communicate o...
lhoestq
https://github.com/huggingface/datasets/pull/6429
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6429", "html_url": "https://github.com/huggingface/datasets/pull/6429", "diff_url": "https://github.com/huggingface/datasets/pull/6429.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6429.patch", "merged_at": "2023-11-28T16:03...
true
1,996,306,394
6,428
Set dev version
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6428). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma...
2023-11-16T08:12:55
2023-11-16T08:19:39
2023-11-16T08:13:28
null
albertvillanova
https://github.com/huggingface/datasets/pull/6428
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6428", "html_url": "https://github.com/huggingface/datasets/pull/6428", "diff_url": "https://github.com/huggingface/datasets/pull/6428.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6428.patch", "merged_at": "2023-11-16T08:13...
true
1,996,248,605
6,427
Release: 2.15.0
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-11-16T07:37:20
2023-11-16T08:12:12
2023-11-16T07:43:05
null
albertvillanova
https://github.com/huggingface/datasets/pull/6427
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6427", "html_url": "https://github.com/huggingface/datasets/pull/6427", "diff_url": "https://github.com/huggingface/datasets/pull/6427.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6427.patch", "merged_at": "2023-11-16T07:43...
true
1,995,363,264
6,426
More robust temporary directory deletion
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6426). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma...
2023-11-15T19:06:42
2023-12-01T15:37:32
2023-12-01T15:31:19
While fixing the Windows errors in #6362, I noticed that `PermissionError` can still easily be thrown on the session exit by the temporary cache directory's finalizer (we would also have to keep track of intermediate datasets, copies, etc.). ~~Due to the low usage of `datasets` on Windows, this PR takes a simpler appro...
mariosasko
https://github.com/huggingface/datasets/pull/6426
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6426", "html_url": "https://github.com/huggingface/datasets/pull/6426", "diff_url": "https://github.com/huggingface/datasets/pull/6426.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6426.patch", "merged_at": "2023-12-01T15:31...
true
1,995,269,382
6,425
Fix deprecation warning when building conda package
closed
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-11-15T18:00:11
2023-12-13T14:22:30
2023-12-13T14:16:00
When building/releasing conda package, we get this deprecation warning: ``` /usr/share/miniconda/envs/build-datasets/bin/conda-build:11: DeprecationWarning: conda_build.cli.main_build.main is deprecated and will be removed in 4.0.0. Use `conda build` instead. ``` This PR fixes the deprecation warning by using `co...
albertvillanova
https://github.com/huggingface/datasets/pull/6425
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6425", "html_url": "https://github.com/huggingface/datasets/pull/6425", "diff_url": "https://github.com/huggingface/datasets/pull/6425.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6425.patch", "merged_at": "2023-12-13T14:16...
true
1,995,224,516
6,424
[docs] troubleshooting guide
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6424). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma...
2023-11-15T17:28:14
2023-11-30T17:29:55
2023-11-30T17:23:46
Hi all! This is a PR adding a troubleshooting guide for Datasets docs. I went through the library's GitHub Issues and Forum questions and identified a few issues that are common enough that I think it would be valuable to include them in the troubleshooting guide. These are: - creating a dataset from a folder and n...
MKhalusova
https://github.com/huggingface/datasets/pull/6424
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6424", "html_url": "https://github.com/huggingface/datasets/pull/6424", "diff_url": "https://github.com/huggingface/datasets/pull/6424.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6424.patch", "merged_at": "2023-11-30T17:23...
true
1,994,946,847
6,423
Fix conda release by adding pyarrow-hotfix dependency
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-11-15T14:57:12
2023-11-15T17:15:33
2023-11-15T17:09:24
Fix conda release by adding pyarrow-hotfix dependency. Note that conda release failed in latest 2.14.7 release: https://github.com/huggingface/datasets/actions/runs/6874667214/job/18696761723 ``` Traceback (most recent call last): File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/t...
albertvillanova
https://github.com/huggingface/datasets/pull/6423
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6423", "html_url": "https://github.com/huggingface/datasets/pull/6423", "diff_url": "https://github.com/huggingface/datasets/pull/6423.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6423.patch", "merged_at": "2023-11-15T17:09...
true
1,994,579,267
6,422
Allow to choose the `writer_batch_size` when using `save_to_disk`
open
[ "We have a config variable that controls the batch size in `save_to_disk`:\r\n```python\r\nimport datasets\r\ndatasets.config.DEFAULT_MAX_BATCH_SIZE = <smaller_batch_size>\r\n...\r\nds.save_to_disk(...)\r\n```", "Thank you for your answer!\r\n\r\nFrom what I am reading in `https://github.com/huggingface/datasets/...
2023-11-15T11:18:34
2023-11-16T10:00:21
null
### Feature request Add an argument in `save_to_disk` regarding batch size, which would be passed to `shard` and other methods. ### Motivation The `Dataset.save_to_disk` method currently calls `shard` without passing a `writer_batch_size` argument, thus implicitly using the default value (1000). This can result in R...
NathanGodey
https://github.com/huggingface/datasets/issues/6422
null
false
1,994,451,553
6,421
Add pyarrow-hotfix to release docs
closed
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-11-15T10:06:44
2023-11-15T13:49:55
2023-11-15T13:38:22
Add `pyarrow-hotfix` to release docs.
albertvillanova
https://github.com/huggingface/datasets/pull/6421
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6421", "html_url": "https://github.com/huggingface/datasets/pull/6421", "diff_url": "https://github.com/huggingface/datasets/pull/6421.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6421.patch", "merged_at": "2023-11-15T13:38...
true
1,994,278,903
6,420
Set dev version
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6420). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma...
2023-11-15T08:22:19
2023-11-15T08:33:36
2023-11-15T08:22:33
null
albertvillanova
https://github.com/huggingface/datasets/pull/6420
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6420", "html_url": "https://github.com/huggingface/datasets/pull/6420", "diff_url": "https://github.com/huggingface/datasets/pull/6420.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6420.patch", "merged_at": "2023-11-15T08:22...
true
1,994,257,873
6,419
Release: 2.14.7
closed
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-11-15T08:07:37
2023-11-15T17:35:30
2023-11-15T08:12:59
Release 2.14.7.
albertvillanova
https://github.com/huggingface/datasets/pull/6419
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6419", "html_url": "https://github.com/huggingface/datasets/pull/6419", "diff_url": "https://github.com/huggingface/datasets/pull/6419.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6419.patch", "merged_at": "2023-11-15T08:12...
true
1,993,224,629
6,418
Remove token value from warnings
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-11-14T17:34:06
2023-11-14T22:26:04
2023-11-14T22:19:45
Fix #6412
mariosasko
https://github.com/huggingface/datasets/pull/6418
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6418", "html_url": "https://github.com/huggingface/datasets/pull/6418", "diff_url": "https://github.com/huggingface/datasets/pull/6418.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6418.patch", "merged_at": "2023-11-14T22:19...
true
1,993,149,416
6,417
Bug: LayoutLMv3 finetuning on FUNSD Notebook; Arrow Error
closed
[ "Very strange: `datasets-cli env`\r\n> \r\n> Copy-and-paste the text below in your GitHub issue.\r\n> \r\n> - `datasets` version: 2.9.0\r\n> - Platform: macOS-14.0-arm64-arm-64bit\r\n> - Python version: 3.9.13\r\n> - PyArrow version: 8.0.0\r\n> - Pandas version: 1.3.5\r\n\r\nAfter updating datasets and pyarrow on b...
2023-11-14T16:53:20
2023-11-16T20:23:41
2023-11-16T20:23:41
### Describe the bug Arrow issues when running the example Notebook laptop locally on Mac with M1. Works on Google Collab. **Notebook**: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv3/Fine_tune_LayoutLMv3_on_FUNSD_(HuggingFace_Trainer).ipynb **Error**: `ValueError: Arrow type extensi...
Davo00
https://github.com/huggingface/datasets/issues/6417
null
false
1,992,954,723
6,416
Rename audio_classificiation.py to audio_classification.py
closed
[ "Oh good catch. Can you also rename it in `src/datasets/tasks/__init__.py` ?", "Fixed! \r\n\r\n(I think, tough word to spell right TBH)", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show...
2023-11-14T15:15:29
2023-11-15T11:59:32
2023-11-15T11:53:20
null
carlthome
https://github.com/huggingface/datasets/pull/6416
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6416", "html_url": "https://github.com/huggingface/datasets/pull/6416", "diff_url": "https://github.com/huggingface/datasets/pull/6416.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6416.patch", "merged_at": "2023-11-15T11:53...
true
1,992,917,248
6,415
Fix multi gpu map example
closed
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-11-14T14:57:18
2024-01-31T00:49:15
2023-11-22T15:42:19
- use `orch.cuda.set_device` instead of `CUDA_VISIBLE_DEVICES ` - add `if __name__ == "__main__"` fix https://github.com/huggingface/datasets/issues/6186
lhoestq
https://github.com/huggingface/datasets/pull/6415
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6415", "html_url": "https://github.com/huggingface/datasets/pull/6415", "diff_url": "https://github.com/huggingface/datasets/pull/6415.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6415.patch", "merged_at": "2023-11-22T15:42...
true
1,992,482,491
6,414
Set `usedforsecurity=False` in hashlib methods (FIPS compliance)
closed
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-11-14T10:47:09
2023-11-17T14:23:20
2023-11-17T14:17:00
Related to https://github.com/huggingface/transformers/issues/27034 and https://github.com/huggingface/huggingface_hub/pull/1782. **TL;DR:** `hashlib` is not a secure library for cryptography-related stuff. We are only using `hashlib` for non-security-related purposes in `datasets` so it's fine. From Python 3.9 we s...
Wauplin
https://github.com/huggingface/datasets/pull/6414
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6414", "html_url": "https://github.com/huggingface/datasets/pull/6414", "diff_url": "https://github.com/huggingface/datasets/pull/6414.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6414.patch", "merged_at": "2023-11-17T14:17...
true
1,992,401,594
6,412
User token is printed out!
closed
[ "Indeed, this is not a good practice. I've opened a PR that removes the token value from the (deprecation) warning." ]
2023-11-14T10:01:34
2023-11-14T22:19:46
2023-11-14T22:19:46
This line prints user token on command line! Is it safe? https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/load.py#L2091
mohsen-goodarzi
https://github.com/huggingface/datasets/issues/6412
null
false
1,992,386,630
6,411
Fix dependency conflict within CI build documentation
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2023-11-14T09:52:51
2023-11-14T10:05:59
2023-11-14T10:05:35
Manually fix dependency conflict on `typing-extensions` version originated by `apache-beam` + `pydantic` (now a dependency of `huggingface-hub`). This is a temporary hot fix of our CI build documentation until we stop using `apache-beam`. Fix #6406.
albertvillanova
https://github.com/huggingface/datasets/pull/6411
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6411", "html_url": "https://github.com/huggingface/datasets/pull/6411", "diff_url": "https://github.com/huggingface/datasets/pull/6411.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6411.patch", "merged_at": "2023-11-14T10:05...
true
1,992,100,209
6,410
Datasets does not load HuggingFace Repository properly
open
[ "Hi! You can avoid the error by requesting only the `jsonl` files. `dataset = load_dataset(\"ai4privacy/pii-masking-200k\", data_files=[\"*.jsonl\"])`.\r\n\r\nOur data file inference does not filter out (incompatible) `json` files because `json` and `jsonl` use the same builder. Still, I think the inference should...
2023-11-14T06:50:49
2023-11-16T06:54:36
null
### Describe the bug Dear Datasets team, We just have published a dataset on Huggingface: https://huggingface.co/ai4privacy However, when trying to read it using the Dataset library we get an error. As I understand jsonl files are compatible, could you please clarify how we can solve the issue? Please let me ...
MikeDoes
https://github.com/huggingface/datasets/issues/6410
null
false
1,991,960,865
6,409
using DownloadManager to download from local filesystem and disable_progress_bar, there will be an exception
closed
[]
2023-11-14T04:21:01
2023-11-22T16:42:09
2023-11-22T16:42:09
### Describe the bug i'm using datasets.download.download_manager.DownloadManager to download files like "file:///a/b/c.txt", and i disable_progress_bar() to disable bar. there will be an exception as follows: `AttributeError: 'function' object has no attribute 'close' Exception ignored in: <function TqdmCallback....
neiblegy
https://github.com/huggingface/datasets/issues/6409
null
false
1,991,902,972
6,408
`IterableDataset` lost but not keep columns when map function adding columns with names in `remove_columns`
open
[]
2023-11-14T03:12:08
2023-11-16T06:24:10
null
### Describe the bug IterableDataset lost but not keep columns when map function adding columns with names in remove_columns, Dataset not. May be related to the code below: https://github.com/huggingface/datasets/blob/06c3ffb8d068b6307b247164b10f7c7311cefed4/src/datasets/iterable_dataset.py#L750-L756 ### Steps t...
shmily326
https://github.com/huggingface/datasets/issues/6408
null
false
1,991,514,079
6,407
Loading the dataset from private S3 bucket gives "TypeError: cannot pickle '_contextvars.Context' object"
open
[ "I have encountered the same problem with `datasets-2.20.0`. \r\n\r\nI found the following workaround for this issue (including the fix from #6598):\r\n1. specify the AWS profile name in the `storage_options` instead of passing an existing session object\r\n2. use a custom `DownloadConfig` object to fix #6598\r\n3....
2023-11-13T21:27:43
2024-07-30T12:35:09
null
### Describe the bug I'm trying to read the parquet file from the private s3 bucket using the `load_dataset` function, but I receive `TypeError: cannot pickle '_contextvars.Context' object` error I'm working on a machine with `~/.aws/credentials` file. I can't give credentials and the path to a file in a private bu...
eawer
https://github.com/huggingface/datasets/issues/6407
null
false
1,990,469,045
6,406
CI Build PR Documentation is broken: ImportError: cannot import name 'TypeAliasType' from 'typing_extensions'
closed
[]
2023-11-13T11:36:10
2023-11-14T10:05:36
2023-11-14T10:05:36
Our CI Build PR Documentation is broken. See: https://github.com/huggingface/datasets/actions/runs/6799554060/job/18486828777?pr=6390 ``` ImportError: cannot import name 'TypeAliasType' from 'typing_extensions' ```
albertvillanova
https://github.com/huggingface/datasets/issues/6406
null
false
1,990,358,743
6,405
ConfigNamesError on a simple CSV file
closed
[ "The viewer is working now. \r\n\r\nBased on the repo commit history, the bug was due to the incorrect format of the `features` field in the README YAML (`Value` requires `dtype`, e.g., `Value(\"string\")`, but it was not specified)", "Feel free to close the issue", "Oh, OK! Thanks. So, there was no reason to o...
2023-11-13T10:28:29
2023-11-13T20:01:24
2023-11-13T20:01:24
See https://huggingface.co/datasets/Nguyendo1999/mmath/discussions/1 ``` Error code: ConfigNamesError Exception: TypeError Message: __init__() missing 1 required positional argument: 'dtype' Traceback: Traceback (most recent call last): File "/src/services/worker/src/worker/job_runn...
severo
https://github.com/huggingface/datasets/issues/6405
null
false
1,990,211,901
6,404
Support pyarrow 14.0.1 and fix vulnerability CVE-2023-47248
closed
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-11-13T09:15:39
2023-11-14T10:29:48
2023-11-14T10:23:29
Support `pyarrow` 14.0.1 and fix vulnerability [CVE-2023-47248](https://github.com/advisories/GHSA-5wvp-7f3h-6wmm). Fix #6396.
albertvillanova
https://github.com/huggingface/datasets/pull/6404
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6404", "html_url": "https://github.com/huggingface/datasets/pull/6404", "diff_url": "https://github.com/huggingface/datasets/pull/6404.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6404.patch", "merged_at": "2023-11-14T10:23...
true
1,990,098,817
6,403
Cannot import datasets on google colab (python 3.10.12)
closed
[ "You are most likely using an outdated version of `datasets` in the notebook, which can be verified with the `!datasets-cli env` command. You can run `!pip install -U datasets` to update the installation.", "okay, it works! thank you so much! 😄 " ]
2023-11-13T08:14:43
2023-11-16T05:04:22
2023-11-16T05:04:21
### Describe the bug I'm trying A full colab demo notebook of zero-shot-distillation from https://github.com/huggingface/transformers/tree/main/examples/research_projects/zero-shot-distillation but i got this type of error when importing datasets on my google colab (python version is 3.10.12) ![image](https://gith...
nabilaannisa
https://github.com/huggingface/datasets/issues/6403
null
false
1,989,094,542
6,402
Update torch_formatter.py
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6402). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>...
2023-11-11T19:40:41
2024-03-15T11:31:53
2024-03-15T11:25:37
Ensure PyTorch images are converted to (C, H, W) instead of (H, W, C). See #6394 for motivation.
varunneal
https://github.com/huggingface/datasets/pull/6402
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6402", "html_url": "https://github.com/huggingface/datasets/pull/6402", "diff_url": "https://github.com/huggingface/datasets/pull/6402.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6402.patch", "merged_at": "2024-03-15T11:25...
true
1,988,710,061
6,401
dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text") not working
closed
[ "Seems like it's a problem with the dataset, since in the [README](https://huggingface.co/datasets/Hyperspace-Technologies/scp-wiki-text/blob/main/README.md) the validation is not specified. Try cloning the dataset, removing the README (or validation split), and loading it locally/ ", "@VarunNSrivastava thanks br...
2023-11-11T04:09:07
2023-11-20T17:45:20
2023-11-20T17:45:20
### Describe the bug ``` (datasets) mruserbox@guru-X99:/media/10TB_HHD/_LLM_DATASETS$ python dataset.py Downloading readme: 100%|███████████████████████████████████| 360/360 [00:00<00:00, 2.16MB/s] Downloading data: 100%|█████████████████████████████████| 65.1M/65.1M [00:19<00:00, 3.38MB/s] Downloading data: 100...
userbox020
https://github.com/huggingface/datasets/issues/6401
null
false
1,988,571,317
6,400
Safely load datasets by disabling execution of dataset loading script
closed
[ "great idea IMO\r\n\r\nthis could be a `trust_remote_code=True` flag like in transformers. We could also default to loading the Parquet conversion rather than executing code (for dataset repos that have both)", "@julien-c that would be great!", "We added the `trust_remote_code` argument to `load_dataset()` in `...
2023-11-10T23:48:29
2024-06-13T15:56:13
2024-06-13T15:56:13
### Feature request Is there a way to disable execution of dataset loading script using `load_dataset`? This is a security vulnerability that could lead to arbitrary code execution. Any suggested workarounds are welcome as well. ### Motivation This is a security vulnerability that could lead to arbitrary code e...
irenedea
https://github.com/huggingface/datasets/issues/6400
null
false
1,988,368,503
6,399
TypeError: Cannot convert pyarrow.lib.ChunkedArray to pyarrow.lib.Array
open
[ "Seconding encountering this issue." ]
2023-11-10T20:48:46
2024-06-22T00:13:48
null
### Describe the bug Hi, I am preprocessing a large custom dataset with numpy arrays. I am running into this TypeError during writing in a dataset.map() function. I've tried decreasing writer batch size, but this error persists. This error does not occur for smaller datasets. Thank you! ### Steps to repro...
y-hwang
https://github.com/huggingface/datasets/issues/6399
null
false
1,987,786,446
6,398
Remove redundant condition in builders
closed
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-11-10T14:56:43
2023-11-14T10:49:15
2023-11-14T10:43:00
Minor refactoring to remove redundant condition.
albertvillanova
https://github.com/huggingface/datasets/pull/6398
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6398", "html_url": "https://github.com/huggingface/datasets/pull/6398", "diff_url": "https://github.com/huggingface/datasets/pull/6398.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6398.patch", "merged_at": "2023-11-14T10:43...
true
1,987,622,152
6,397
Raise a different exception for inexisting dataset vs files without known extension
closed
[]
2023-11-10T13:22:14
2023-11-22T15:12:34
2023-11-22T15:12:34
See https://github.com/huggingface/datasets-server/issues/2082#issuecomment-1805716557 We have the same error for: - https://huggingface.co/datasets/severo/a_dataset_that_does_not_exist: a dataset that does not exist - https://huggingface.co/datasets/severo/test_files_without_extension: a dataset with files withou...
severo
https://github.com/huggingface/datasets/issues/6397
null
false
1,987,308,077
6,396
Issue with pyarrow 14.0.1
closed
[ "Looks like we should stop using `PyExtensionType` and use `ExtensionType` instead\r\n\r\nsee https://github.com/apache/arrow/commit/f14170976372436ec1d03a724d8d3f3925484ecf", "https://github.com/huggingface/datasets-server/pull/2089#pullrequestreview-1724449532\r\n\r\n> Yes, I understand now: they have disabled ...
2023-11-10T10:02:12
2025-08-19T18:13:30
2023-11-14T10:23:30
See https://github.com/huggingface/datasets-server/pull/2089 for reference ``` from datasets import (Array2D, Dataset, Features) feature_type = Array2D(shape=(2, 2), dtype="float32") content = [[0.0, 0.0], [0.0, 0.0]] features = Features({"col": feature_type}) dataset = Dataset.from_dict({"col": [content]}, fea...
severo
https://github.com/huggingface/datasets/issues/6396
null
false
1,986,484,124
6,395
Add ability to set lock type
closed
[ "We've replaced our filelock implementation with the `filelock` package, so their repo is the right place to request this feature.\r\n\r\nIn the meantime, the following should work: \r\n```python\r\nimport filelock\r\nfilelock.FileLock = filelock.SoftFileLock\r\n\r\nimport datasets\r\n...\r\n```" ]
2023-11-09T22:12:30
2023-11-23T18:50:00
2023-11-23T18:50:00
### Feature request Allow setting file lock type, maybe from an environment variable Currently, it only depends on whether fnctl is available: https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/utils/filelock.py#L463-L470C16 ### Motivation In my environment...
leoleoasd
https://github.com/huggingface/datasets/issues/6395
null
false
1,985,947,116
6,394
TorchFormatter images (H, W, C) instead of (C, H, W) format
closed
[ "Here's a PR for that. https://github.com/huggingface/datasets/pull/6402\r\n\r\nIt's not backward compatible, unfortunately. ", "Just ran into this working on data lib that's attempting to achieve common interfaces across hf datasets, webdataset, native torch style datasets. The defacto standards for image tensor...
2023-11-09T16:02:15
2024-04-11T12:40:16
2024-04-11T12:40:16
### Describe the bug Using .set_format("torch") leads to images having shape (H, W, C), the same as in numpy. However, pytorch normally uses (C, H, W) format. Maybe I'm missing something but this makes the format a lot less useful as I then have to permute it anyways. If not using the format it is possible to ...
Modexus
https://github.com/huggingface/datasets/issues/6394
null
false
1,984,913,259
6,393
Filter occasionally hangs
closed
[ "It looks like I may not be the first to encounter this: https://github.com/huggingface/datasets/issues/3172", "Adding some more information, it seems to occur more frequently with large (millions of samples) datasets.", "More information. My code is structured as (1) load (2) map (3) filter (4) filter. It was ...
2023-11-09T06:18:30
2025-02-22T00:49:19
2025-02-22T00:49:19
### Describe the bug A call to `.filter` occasionally hangs (after the filter is complete, according to tqdm) There is a trace produced ``` Exception ignored in: <function Dataset.__del__ at 0x7efb48130c10> Traceback (most recent call last): File "/usr/lib/python3/dist-packages/datasets/arrow_dataset.py", l...
dakinggg
https://github.com/huggingface/datasets/issues/6393
null
false
1,984,369,545
6,392
`push_to_hub` is not robust to hub closing connection
closed
[ "Hi! We made some improvements to `push_to_hub` to make it more robust a couple of weeks ago but haven't published a release in the meantime, so it would help if you could install `datasets` from `main` (`pip install https://github.com/huggingface/datasets`) and let us know if this improved version of `push_to_hub`...
2023-11-08T20:44:53
2023-12-20T07:28:24
2023-12-01T17:51:34
### Describe the bug Like to #6172, `push_to_hub` will crash if Hub resets the connection and raise the following error: ``` Pushing dataset shards to the dataset hub: 32%|███▏ | 54/171 [06:38<14:23, 7.38s/it] Traceback (most recent call last): File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/...
msis
https://github.com/huggingface/datasets/issues/6392
null
false
1,984,091,776
6,391
Webdataset dataset builder
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "I added an error message if the first examples don't appear to be in webdataset format\r\n```\r\n\"The TAR archives of the dataset should be in Webdataset format, \"\r\n\"but the files in the archive don't share the same prefix or th...
2023-11-08T17:31:59
2024-05-22T16:51:08
2023-11-28T16:33:10
Allow `load_dataset` to support the Webdataset format. It allows users to download/stream data from local files or from the Hugging Face Hub. Moreover it will enable the Dataset Viewer for Webdataset datasets on HF. ## Implementation details - I added a new Webdataset builder - dataset with TAR files are n...
lhoestq
https://github.com/huggingface/datasets/pull/6391
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6391", "html_url": "https://github.com/huggingface/datasets/pull/6391", "diff_url": "https://github.com/huggingface/datasets/pull/6391.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6391.patch", "merged_at": "2023-11-28T16:33...
true
1,983,725,707
6,390
handle future deprecation argument
closed
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-11-08T14:21:25
2023-11-21T02:10:24
2023-11-14T15:15:59
getting this error: ``` /root/miniconda3/envs/py3.10/lib/python3.10/site-packages/datasets/table.py:1387: FutureWarning: promote has been superseded by mode='default'. return cls._concat_blocks(pa_tables_to_concat_vertically, axis=0) ``` Since datasets supports arrow greater than 8.0.0, we need to handle both ...
winglian
https://github.com/huggingface/datasets/pull/6390
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6390", "html_url": "https://github.com/huggingface/datasets/pull/6390", "diff_url": "https://github.com/huggingface/datasets/pull/6390.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6390.patch", "merged_at": "2023-11-14T15:15...
true
1,983,545,744
6,389
Index 339 out of range for dataset of size 339 <-- save_to_file()
open
[ "Hi! Can you make the above reproducer self-contained by adding code that generates the data?", "I managed a workaround eventually but I don't know what it was (I made a lot of changes to seq2seq). I'll try to include generating code in the future. (If I close, I don't know if you see it. Feel free to close; I'l...
2023-11-08T12:52:09
2023-11-24T09:14:13
null
### Describe the bug When saving out some Audio() data. The data is audio recordings with associated 'sentences'. (They use the audio 'bytes' approach because they're clips within audio files). Code is below the traceback (I can't upload the voice audio/text (it's not even me)). ``` Traceback (most recent call ...
jaggzh
https://github.com/huggingface/datasets/issues/6389
null
false
1,981,136,093
6,388
How to create 3d medical imgae dataset?
open
[]
2023-11-07T11:27:36
2023-11-07T11:28:53
null
### Feature request I am newer to huggingface, after i look up `datasets` docs, I can't find how to create the dataset contains 3d medical image (ends with '.mhd', '.dcm', '.nii') ### Motivation help us to upload 3d medical dataset to huggingface! ### Your contribution I'll submit a PR if I find a way to...
QingYunA
https://github.com/huggingface/datasets/issues/6388
null
false
1,980,224,020
6,387
How to load existing downloaded dataset ?
closed
[ "Feel free to use `dataset.save_to_disk(...)`, then scp the directory containing the saved dataset and reload it on your other machine using `dataset = load_from_disk(...)`" ]
2023-11-06T22:51:44
2023-11-16T18:07:01
2023-11-16T18:07:01
Hi @mariosasko @lhoestq @katielink Thanks for your contribution and hard work. ### Feature request First, I download a dataset as normal by: ``` from datasets import load_dataset dataset = load_dataset('username/data_name', cache_dir='data') ``` The dataset format in `data` directory will be: ``` ...
liming-ai
https://github.com/huggingface/datasets/issues/6387
null
false
1,979,878,014
6,386
Formatting overhead
closed
[ "Ah I think the `line-profiler` log is off-by-one and it is in fact the `extract_batch` method that's taking forever. Will investigate further.", "I tracked it down to a quirk of my setup. Apologies." ]
2023-11-06T19:06:38
2023-11-06T23:56:12
2023-11-06T23:56:12
### Describe the bug Hi! I very recently noticed that my training time is dominated by batch formatting. Using Lightning's profilers, I located the bottleneck within `datasets.formatting.formatting` and then narrowed it down with `line-profiler`. It turns out that almost all of the overhead is due to creating new inst...
d-miketa
https://github.com/huggingface/datasets/issues/6386
null
false
1,979,308,338
6,385
Get an error when i try to concatenate the squad dataset with my own dataset
closed
[ "The `answers.text` field in the JSON dataset needs to be a list of strings, not a string.\r\n\r\nSo, here is the fixed code:\r\n```python\r\nfrom huggingface_hub import notebook_login\r\nfrom datasets import load_dataset\r\n\r\n\r\n\r\nnotebook_login(\"mymailadresse\", \"mypassword\")\r\nsquad = load_dataset(\"squ...
2023-11-06T14:29:22
2023-11-06T16:50:45
2023-11-06T16:50:45
### Describe the bug Hello, I'm new here and I need to concatenate the squad dataset with my own dataset i created. I find the following error when i try to do it: Traceback (most recent call last): Cell In[9], line 1 concatenated_dataset = concatenate_datasets([train_dataset, dataset1]) File ~\ana...
CCDXDX
https://github.com/huggingface/datasets/issues/6385
null
false
1,979,117,069
6,384
Load the local dataset folder from other place
closed
[ "Solved" ]
2023-11-06T13:07:04
2023-11-19T05:42:06
2023-11-19T05:42:05
This is from https://github.com/huggingface/diffusers/issues/5573
OrangeSodahub
https://github.com/huggingface/datasets/issues/6384
null
false
1,978,189,389
6,383
imagenet-1k downloads over and over
closed
[ "Have you solved this problem?" ]
2023-11-06T02:58:58
2024-06-12T13:15:00
2023-11-06T06:02:39
### Describe the bug What could be causing this? ``` $ python3 Python 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> from datasets import load_dataset >>> load_dataset("imagenet-1k") Downloading builder ...
seann999
https://github.com/huggingface/datasets/issues/6383
null
false
1,977,400,799
6,382
Add CheXpert dataset for vision
open
[ "Hey @SauravMaheshkar ! Just responded to your email.\r\n\r\n_For transparency, copying part of my response here:_\r\nI agree, it would be really great to have this and other BenchMD datasets easily accessible on the hub.\r\n\r\nI think the main limiting factor is that the ChexPert dataset is currently hosted on th...
2023-11-04T15:36:11
2024-01-10T11:53:52
null
### Feature request ### Name **CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison** ### Paper https://arxiv.org/abs/1901.07031 ### Data https://stanfordaimi.azurewebsites.net/datasets/8cbd9ed4-2eb9-4565-affc-111cf4f7ebe2 ### Motivation CheXpert is one of the fund...
SauravMaheshkar
https://github.com/huggingface/datasets/issues/6382
null
false
1,975,028,470
6,381
Add my dataset
closed
[ "Hi! We do not host datasets in this repo. Instead, you should use `dataset.push_to_hub` to upload the dataset to the HF Hub.", "@mariosasko could you provide me proper guide to push data on HF hub ", "You can find this info here: https://huggingface.co/docs/datasets/upload_dataset. Also, check https://huggingf...
2023-11-02T20:59:52
2023-11-08T14:37:46
2023-11-06T15:50:14
## medical data **Description:** This dataset, named "medical data," is a collection of text data from various sources, carefully curated and cleaned for use in natural language processing (NLP) tasks. It consists of a diverse range of text, including articles, books, and online content, covering topics from scienc...
keyur536
https://github.com/huggingface/datasets/pull/6381
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6381", "html_url": "https://github.com/huggingface/datasets/pull/6381", "diff_url": "https://github.com/huggingface/datasets/pull/6381.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6381.patch", "merged_at": null }
true
1,974,741,221
6,380
Fix for continuation behaviour on broken dataset archives due to starving download connections via HTTP-GET
open
[]
2023-11-02T17:28:23
2023-11-02T17:31:19
null
This PR proposes a (slightly hacky) fix for an Issue that can occur when downloading large dataset parts over unstable connections. The underlying issue is also being discussed in https://github.com/huggingface/datasets/issues/5594. Issue Symptoms & Behaviour: - Download of a large archive file during dataset down...
RuntimeRacer
https://github.com/huggingface/datasets/pull/6380
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6380", "html_url": "https://github.com/huggingface/datasets/pull/6380", "diff_url": "https://github.com/huggingface/datasets/pull/6380.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6380.patch", "merged_at": null }
true
1,974,638,850
6,379
Avoid redundant warning when encoding NumPy array as `Image`
closed
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-11-02T16:37:58
2023-11-06T17:53:27
2023-11-02T17:08:07
Avoid a redundant warning in `encode_np_array` by removing the identity check as NumPy `dtype`s can be equal without having identical `id`s. Additionally, fix "unreachable" checks in `encode_np_array`.
mariosasko
https://github.com/huggingface/datasets/pull/6379
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6379", "html_url": "https://github.com/huggingface/datasets/pull/6379", "diff_url": "https://github.com/huggingface/datasets/pull/6379.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6379.patch", "merged_at": "2023-11-02T17:08...
true
1,973,942,770
6,378
Support pyarrow 14.0.0
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-11-02T10:25:10
2023-11-02T15:24:28
2023-11-02T15:15:44
Support `pyarrow` 14.0.0. Fix #6377 and fix #6374 (root cause). This fix is analog to a previous one: - #6175
albertvillanova
https://github.com/huggingface/datasets/pull/6378
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6378", "html_url": "https://github.com/huggingface/datasets/pull/6378", "diff_url": "https://github.com/huggingface/datasets/pull/6378.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6378.patch", "merged_at": "2023-11-02T15:15...
true
1,973,937,612
6,377
Support pyarrow 14.0.0
closed
[]
2023-11-02T10:22:08
2023-11-02T15:15:45
2023-11-02T15:15:45
Support pyarrow 14.0.0 by fixing the root cause of: - #6374 and revert: - #6375
albertvillanova
https://github.com/huggingface/datasets/issues/6377
null
false
1,973,927,468
6,376
Caching problem when deleting a dataset
closed
[ "Thanks for reporting! Can you also share the error message printed in step 5?", "I did not store it at the time but I'll try to re-do a mwe next week to get it again", "I haven't managed to reproduce this issue using a [notebook](https://colab.research.google.com/drive/1m6eduYun7pFTkigrCJAFgw0BghlbvXIL?usp=sha...
2023-11-02T10:15:58
2023-12-04T16:53:34
2023-12-04T16:53:33
### Describe the bug Pushing a dataset with n + m features to a repo that was deleted but previously contained n features will fail. ### Steps to reproduce the bug 1. Create a dataset with n features per row 2. `dataset.push_to_hub(YOUR_PATH, SPLIT, token=TOKEN)` 3. Go on the hub, delete the repo at `YOUR_PATH` 4. Update...
clefourrier
https://github.com/huggingface/datasets/issues/6376
null
false
1,973,877,879
6,375
Temporarily pin pyarrow < 14.0.0
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-11-02T09:48:58
2023-11-02T10:22:33
2023-11-02T10:11:19
Temporarily pin `pyarrow` < 14.0.0 until permanent solution is found. Hot fix #6374.
albertvillanova
https://github.com/huggingface/datasets/pull/6375
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6375", "html_url": "https://github.com/huggingface/datasets/pull/6375", "diff_url": "https://github.com/huggingface/datasets/pull/6375.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6375.patch", "merged_at": "2023-11-02T10:11...
true
1,973,857,428
6,374
CI is broken: TypeError: Couldn't cast array
closed
[]
2023-11-02T09:37:06
2023-11-02T10:11:20
2023-11-02T10:11:20
See: https://github.com/huggingface/datasets/actions/runs/6730567226/job/18293518039 ``` FAILED tests/test_table.py::test_cast_sliced_fixed_size_array_to_features - TypeError: Couldn't cast array of type fixed_size_list<item: int32>[3] to Sequence(feature=Value(dtype='int64', id=None), length=3, id=None) ```
albertvillanova
https://github.com/huggingface/datasets/issues/6374
null
false
1,973,349,695
6,373
Fix typo in `Dataset.map` docstring
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-11-02T01:36:49
2023-11-02T15:18:22
2023-11-02T10:11:38
null
bryant1410
https://github.com/huggingface/datasets/pull/6373
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6373", "html_url": "https://github.com/huggingface/datasets/pull/6373", "diff_url": "https://github.com/huggingface/datasets/pull/6373.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6373.patch", "merged_at": "2023-11-02T10:11...
true
1,972,837,794
6,372
do not try to download from HF GCS for generator
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-11-01T17:57:11
2023-11-02T16:02:52
2023-11-02T15:52:09
attempt to fix https://github.com/huggingface/datasets/issues/6371
yundai424
https://github.com/huggingface/datasets/pull/6372
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6372", "html_url": "https://github.com/huggingface/datasets/pull/6372", "diff_url": "https://github.com/huggingface/datasets/pull/6372.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6372.patch", "merged_at": "2023-11-02T15:52...
true
1,972,807,579
6,371
`Dataset.from_generator` should not try to download from HF GCS
closed
[ "Indeed, setting `try_from_gcs` to `False` makes sense for `from_generator`.\r\n\r\nWe plan to deprecate and remove `try_from_hf_gcs` soon, as we can use Hub for file hosting now, but this is a good temporary fix.\r\n" ]
2023-11-01T17:36:17
2023-11-02T15:52:10
2023-11-02T15:52:10
### Describe the bug When using [`Dataset.from_generator`](https://github.com/huggingface/datasets/blob/c9c1166e1cf81d38534020f9c167b326585339e5/src/datasets/arrow_dataset.py#L1072) with `streaming=False`, the internal logic will call [`download_and_prepare`](https://github.com/huggingface/datasets/blob/main/src/datas...
yundai424
https://github.com/huggingface/datasets/issues/6371
null
false
1,972,073,909
6,370
TensorDataset format does not work with Trainer from transformers
closed
[ "I figured it out. I found that `Trainer` does not work with TensorDataset even though the document says it uses it. Instead, I ended up creating a dictionary and converting it to a dataset using `dataset.Dataset.from_dict()`.\r\n\r\nI will leave this post open for a while. If someone knows a better approach, pleas...
2023-11-01T10:09:54
2023-11-29T16:31:08
2023-11-29T16:31:08
### Describe the bug The model was built to do fine-tuning on a BERT model for relation extraction. trainer.train() returns an error message ```TypeError: vars() argument must have __dict__ attribute``` when its `train_dataset` is generated from `torch.utils.data.TensorDataset`. However, in the documentation, the req...
jinzzasol
https://github.com/huggingface/datasets/issues/6370
null
false
1,971,794,108
6,369
Multi-process map did not load cache file correctly
closed
[ "The inconsistency may be caused by the usage of \"update_fingerprint\" and setting \"trust_remote_code\" to \"True.\"\r\nWhen the tokenizer employs \"trust_remote_code,\" the behavior of the map function varies with each code execution. Even if the remote code of the tokenizer remains the same, the result of \"ash...
2023-11-01T06:36:54
2023-11-30T16:04:46
2023-11-30T16:04:45
### Describe the bug When I was training a model on multiple GPUs with DDP, the dataset was tokenized multiple times after the main process. ![1698820541284](https://github.com/huggingface/datasets/assets/14285786/0b2fe054-54d8-4e00-96e6-6ca5b69e662b) ![1698820501568](https://github.com/huggingface/datasets/assets/142857...
enze5088
https://github.com/huggingface/datasets/issues/6369
null
false
1,971,193,692
6,368
Fix python formatting for complex types in `format_table`
closed
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-10-31T19:48:08
2023-11-02T14:42:28
2023-11-02T14:21:16
Fix #6366
mariosasko
https://github.com/huggingface/datasets/pull/6368
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6368", "html_url": "https://github.com/huggingface/datasets/pull/6368", "diff_url": "https://github.com/huggingface/datasets/pull/6368.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6368.patch", "merged_at": "2023-11-02T14:21...
true
1,971,015,861
6,367
Fix time measuring snippet in docs
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-10-31T17:57:17
2023-10-31T18:35:53
2023-10-31T18:24:02
Fix https://discuss.huggingface.co/t/attributeerror-enter/60509
mariosasko
https://github.com/huggingface/datasets/pull/6367
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6367", "html_url": "https://github.com/huggingface/datasets/pull/6367", "diff_url": "https://github.com/huggingface/datasets/pull/6367.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6367.patch", "merged_at": "2023-10-31T18:24...
true
1,970,213,490
6,366
with_format() function returns bytes instead of PIL images even when image column is not part of "columns"
closed
[ "Thanks for reporting! I've opened a PR with a fix." ]
2023-10-31T11:10:48
2023-11-02T14:21:17
2023-11-02T14:21:17
### Describe the bug When using the with_format() function on a dataset containing images, even if the image column is not part of the columns provided in the function, its type will be changed to bytes. Here is a minimal reproduction of the bug: https://colab.research.google.com/drive/1hyaOspgyhB41oiR1-tXE3k_gJCdJU...
leot13
https://github.com/huggingface/datasets/issues/6366
null
false
1,970,140,392
6,365
Parquet size grows exponentially for categorical data
closed
[ "Wrong repo." ]
2023-10-31T10:29:02
2023-10-31T10:49:17
2023-10-31T10:49:17
### Describe the bug It seems that when saving a data frame with a categorical column inside, the size can grow exponentially. This seems to happen because, when we save the categorical data to parquet, we are saving the data + all the categories existing in the original data. This happens even when the categories ar...
aseganti
https://github.com/huggingface/datasets/issues/6365
null
false
1,969,136,106
6,364
ArrowNotImplementedError: Unsupported cast from string to list using function cast_list
closed
[ "You can use the following code to load this CSV with the list values preserved:\r\n```python\r\nfrom datasets import load_dataset\r\nimport ast\r\n\r\nconverters = {\r\n \"contexts\" : ast.literal_eval,\r\n \"ground_truths\" : ast.literal_eval,\r\n}\r\n\r\nds = load_dataset(\"csv\", data_files=\"golden_datas...
2023-10-30T20:14:01
2023-10-31T19:21:23
2023-10-31T19:21:23
Hi, I am trying to load a local csv dataset (similar to explodinggradients_fiqa) using load_dataset. When I try to pass features, I am facing the mentioned issue. CSV data sample (golden_dataset.csv): Question | Context | answer | groundtruth "what is abc?"...
divyakrishna-devisetty
https://github.com/huggingface/datasets/issues/6364
null
false
1,968,891,277
6,363
dataset.transform() hangs indefinitely while fine-tuning Stable Diffusion XL
closed
[ "I think the code hangs on the `accelerator.main_process_first()` context manager exit. To verify this, you can append a print statement to the end of the `accelerator.main_process_first()` block. \r\n\r\n\r\nIf the problem is in `with_transform`, it would help if you could share the error stack trace printed when...
2023-10-30T17:34:05
2023-11-22T00:29:21
2023-11-22T00:29:21
### Describe the bug Multi-GPU fine-tuning of Stable Diffusion XL by following https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/README_sdxl.md hangs indefinitely. ### Steps to reproduce the bug accelerate launch train_text_to_image_sdxl.py --pretrained_model_name_or_path=$MODEL_NAME --...
bhosalems
https://github.com/huggingface/datasets/issues/6363
null
false
1,965,794,569
6,362
Simplify filesystem logic
closed
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-10-27T15:54:18
2023-11-15T14:08:29
2023-11-15T14:02:02
Simplifies the existing filesystem logic (e.g., to avoid unnecessary if-else as mentioned in https://github.com/huggingface/datasets/pull/6098#issue-1827655071)
mariosasko
https://github.com/huggingface/datasets/pull/6362
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6362", "html_url": "https://github.com/huggingface/datasets/pull/6362", "diff_url": "https://github.com/huggingface/datasets/pull/6362.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6362.patch", "merged_at": "2023-11-15T14:02...
true
1,965,672,950
6,360
Add support for `Sequence(Audio/Image)` feature in `push_to_hub`
closed
[ "This issue stems from https://github.com/huggingface/datasets/blob/6d2f2a5e0fea3827eccfd1717d8021c15fc4292a/src/datasets/table.py#L2203-L2205\r\n\r\nI'll address it as part of https://github.com/huggingface/datasets/pull/6283.\r\n\r\nIn the meantime, this should work\r\n\r\n```python\r\nimport pyarrow as pa\r\nfro...
2023-10-27T14:39:57
2024-02-06T19:24:20
2024-02-06T19:24:20
### Feature request Allow for `Sequence` of `Image` (or `Audio`) to be embedded inside the shards. ### Motivation Currently, thanks to #3685, when `embed_external_files` is set to True (which is the default) in `push_to_hub`, features of type `Image` and `Audio` are embedded inside the arrow/parquet shards, instead ...
Laurent2916
https://github.com/huggingface/datasets/issues/6360
null
false
1,965,378,583
6,359
Stuck in "Resolving data files..."
open
[ "Most likely, the data file inference logic is the problem here.\r\n\r\nYou can run the following code to verify this:\r\n```python\r\nimport time\r\nfrom datasets.data_files import get_data_patterns\r\nstart_time = time.time()\r\nget_data_patterns(\"/path/to/img_dir\")\r\nend_time = time.time()\r\nprint(f\"Elapsed...
2023-10-27T12:01:51
2025-03-09T02:18:19
null
### Describe the bug I have an image dataset with 300k images; the size of each image is 768 * 768. When I run `dataset = load_dataset("imagefolder", data_dir="/path/to/img_dir", split='train')` a second time, it takes 50 minutes to finish the "Resolving data files" part. What's going on in this part? From my understa...
Luciennnnnnn
https://github.com/huggingface/datasets/issues/6359
null
false
1,965,014,595
6,358
Mounting datasets cache fails due to absolute paths.
closed
[ "You may be able to make it work by tweaking some environment variables, such as [`HF_HOME`](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/environment_variables#hfhome) or [`HF_DATASETS_CACHE`](https://huggingface.co/docs/datasets/cache#cache-directory).", "> You may be able to make it wor...
2023-10-27T08:20:27
2024-04-10T08:50:06
2023-11-28T14:47:12
### Describe the bug Creating a datasets cache and mounting this into, for example, a docker container, renders the data unreadable due to absolute paths written into the cache. ### Steps to reproduce the bug 1. Create a datasets cache by downloading some data 2. Mount the dataset folder into a docker contain...
charliebudd
https://github.com/huggingface/datasets/issues/6358
null
false
1,964,653,995
6,357
Allow passing a multiprocessing context to functions that support `num_proc`
open
[]
2023-10-27T02:31:16
2023-10-27T02:31:16
null
### Feature request Allow specifying [a multiprocessing context](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods) to functions that support `num_proc` or use multiprocessing pools. For example, the following could be done: ```python dataset = dataset.map(_func, num_proc=2, mp_cont...
bryant1410
https://github.com/huggingface/datasets/issues/6357
null
false
1,964,015,802
6,356
Add `fsspec` version to the `datasets-cli env` command output
closed
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-10-26T17:19:25
2023-10-26T18:42:56
2023-10-26T18:32:21
... to make debugging issues easier, as `fsspec`'s releases often introduce breaking changes.
mariosasko
https://github.com/huggingface/datasets/pull/6356
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6356", "html_url": "https://github.com/huggingface/datasets/pull/6356", "diff_url": "https://github.com/huggingface/datasets/pull/6356.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6356.patch", "merged_at": "2023-10-26T18:32...
true
1,963,979,896
6,355
More hub centric docs
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-10-26T16:54:46
2024-01-11T06:34:16
2023-10-30T17:32:57
Let's have more hub-centric documentation in the datasets docs Tutorials - Add “Configure the dataset viewer” page - Change order: - Overview - and more focused on the Hub rather than the library - Then all the hub related things - and mention how to read/write with other tools like pandas - The...
lhoestq
https://github.com/huggingface/datasets/pull/6355
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6355", "html_url": "https://github.com/huggingface/datasets/pull/6355", "diff_url": "https://github.com/huggingface/datasets/pull/6355.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6355.patch", "merged_at": null }
true
1,963,483,324
6,354
`IterableDataset.from_spark` does not support multiple workers in pytorch `Dataloader`
open
[ "I am having issues as well with this. \r\n\r\nHowever, the error I am getting is :\r\n`RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more informati...
2023-10-26T12:43:36
2024-12-10T14:06:06
null
### Describe the bug Looks like `IterableDataset.from_spark` does not support multiple workers in the pytorch `Dataloader`, if I'm not missing anything. Also, it returns inconsistent error messages, which probably depend on the nondeterministic order of worker executions. Some examples I've encountered: ``` File "/l...
NazyS
https://github.com/huggingface/datasets/issues/6354
null
false
1,962,646,450
6,353
load_dataset save_to_disk load_from_disk error
closed
[ "solved.\r\nfsspec version problem", "I'm using the latest datasets and fsspec , but still got this error!\r\n\r\ndatasets : Version: 2.13.0\r\n\r\nfsspec Version: 2023.10.0\r\n\r\n```\r\nFile \"/home/guoby/app/Anaconda3-2021.05/envs/news/lib/python3.8/site-packages/datasets/load.py\", line 1892, in load_from_...
2023-10-26T03:47:06
2024-04-03T05:31:01
2023-10-26T10:18:04
### Describe the bug datasets version: 2.10.1 I `load_dataset` and `save_to_disk` successfully on windows10 (**and I `load_from_disk(/LLM/data/wiki)` successfully on windows10**), and I copy the dataset `/LLM/data/wiki` into an ubuntu system, but when I `load_from_disk(/LLM/data/wiki)` on ubuntu, something weird ha...
brisker
https://github.com/huggingface/datasets/issues/6353
null
false
1,962,296,057
6,352
Error loading wikitext data raises NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.")
closed
[ "+1 \r\n```\r\nFound cached dataset csv (file:///home/ubuntu/.cache/huggingface/datasets/theSquarePond___csv/theSquarePond--XXXXX-bbf0a8365d693d2c/0.0.0/eea64c71ca8b46dd3f537ed218fc9bf495d5707789152eb2764f5c78fa66d59d)\r\n---------------------------------------------------------------------------\r\nNotImplementedE...
2023-10-25T21:55:31
2024-03-19T16:46:22
2023-11-07T07:26:54
I was trying to load the wiki dataset, but I got this error: traindata = load_dataset('wikitext', 'wikitext-2-raw-v1', split='train') File "/home/aelkordy/.conda/envs/prune_llm/lib/python3.9/site-packages/datasets/load.py", line 1804, in load_dataset ds = builder_instance.as_dataset(split=split, verific...
Ahmed-Roushdy
https://github.com/huggingface/datasets/issues/6352
null
false
1,961,982,988
6,351
Fix use_dataset.mdx
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-10-25T18:21:08
2023-10-26T17:19:49
2023-10-26T17:10:27
The current example isn't working because it can't find `labels` inside the Dataset object. So I've added an extra step to the process. Tested and working in Colab.
angel-luis
https://github.com/huggingface/datasets/pull/6351
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6351", "html_url": "https://github.com/huggingface/datasets/pull/6351", "diff_url": "https://github.com/huggingface/datasets/pull/6351.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6351.patch", "merged_at": "2023-10-26T17:10...
true
1,961,869,203
6,350
Different objects are returned from calls that should be returning the same kind of object.
open
[ "`load_dataset` returns a `DatasetDict` object unless `split` is defined, in which case it returns a `Dataset` (or a list of datasets if `split` is a list). We've discussed dropping `DatasetDict` from the API in https://github.com/huggingface/datasets/issues/5189 to always return the same type in `load_dataset` an...
2023-10-25T17:08:39
2023-10-26T21:03:06
null
### Describe the bug 1. dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", cache_dir=training_args.cache_dir, split='train[:1%]') 2. dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", cache_dir=training_args.cache_dir) The only difference I would expect these cal...
phalexo
https://github.com/huggingface/datasets/issues/6350
null
false
1,961,435,673
6,349
Can't load ds = load_dataset("imdb")
closed
[ "I'm unable to reproduce this error. The server hosting the files may have been down temporarily, so try again.", "getting the same error", "I am getting the following error:\r\nEnv: Python3.10\r\ndatasets: 2.10.1\r\nLinux: Amazon Linux2\r\n\r\n`Traceback (most recent call last):\r\n File \"<stdin>\", line 1, ...
2023-10-25T13:29:51
2024-03-20T15:09:53
2023-10-31T19:59:35
### Describe the bug I did `from datasets import load_dataset, load_metric` and then `ds = load_dataset("imdb")` and it gave me the error: ExpectedMoreDownloadedFiles: {'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'} I tried doing `ds = load_dataset("imdb",download_mode="force_redownload")` as we...
vivianc2
https://github.com/huggingface/datasets/issues/6349
null
false