| column          | type          | min                 | max                 |
|-----------------|---------------|---------------------|---------------------|
| id              | int64         | 599M                | 3.48B               |
| number          | int64         | 1                   | 7.8k                |
| title           | string length | 1                   | 290                 |
| state           | string        | 2 values            |                     |
| comments        | list length   | 0                   | 30                  |
| created_at      | timestamp[s]  | 2020-04-14 10:18:02 | 2025-10-05 06:37:50 |
| updated_at      | timestamp[s]  | 2020-04-27 16:04:17 | 2025-10-05 10:32:43 |
| closed_at       | timestamp[s]  | 2020-04-14 12:01:40 | 2025-10-01 13:56:03 |
| body            | string length | 0                   | 228k                |
| user            | string length | 3                   | 26                  |
| html_url        | string length | 46                  | 51                  |
| pull_request    | dict          |                     |                     |
| is_pull_request | bool          | 2 classes           |                     |
id: 1698155751
number: 5826
title: Support working_dir in from_spark
state: closed
comments: [ "_The documentation is not available anymore as the PR was closed or merged._", "Added env var", "@lhoestq would you or another maintainer be able to review please? :)", "I removed the env var", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!<...
created_at: 2023-05-05T20:22:40
updated_at: 2023-05-25T17:45:54
closed_at: 2023-05-25T08:46:15
body: Accept `working_dir` as an argument to `Dataset.from_spark`. Setting a non-NFS working directory for Spark workers to materialize to will improve write performance.
user: maddiedawson
html_url: https://github.com/huggingface/datasets/pull/5826
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/5826", "html_url": "https://github.com/huggingface/datasets/pull/5826", "diff_url": "https://github.com/huggingface/datasets/pull/5826.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5826.patch", "merged_at": "2023-05-25T08:46...
is_pull_request: true
id: 1697327483
number: 5825
title: FileNotFound even though exists
state: closed
comments: [ "Hi! \r\n\r\nThis would only work if `bigscience/xP3` was a no-code dataset, but it isn't (it has a Python builder script).\r\n\r\nBut this should work: \r\n```python\r\nload_dataset(\"json\", data_files=\"https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_a...
created_at: 2023-05-05T09:49:55
updated_at: 2023-08-16T10:02:01
closed_at: 2023-08-16T10:02:01
body: ### Describe the bug I'm trying to download https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl which works fine in my webbrowser, but somehow not with datasets. Am I doing sth wrong? ``` Downloading builder script: 100% 2.82k/2.8...
user: Muennighoff
html_url: https://github.com/huggingface/datasets/issues/5825
pull_request: null
is_pull_request: false
id: 1697152148
number: 5824
title: Fix incomplete docstring for `BuilderConfig`
state: closed
comments: [ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
created_at: 2023-05-05T07:34:28
updated_at: 2023-05-05T12:39:14
closed_at: 2023-05-05T12:31:54
body: Fixes #5820 Also fixed a couple of typos I spotted
user: Laurent2916
html_url: https://github.com/huggingface/datasets/pull/5824
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/5824", "html_url": "https://github.com/huggingface/datasets/pull/5824", "diff_url": "https://github.com/huggingface/datasets/pull/5824.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5824.patch", "merged_at": "2023-05-05T12:31...
is_pull_request: true
id: 1697024789
number: 5823
title: [2.12.0] DatasetDict.save_to_disk not saving to S3
state: closed
comments: [ "Hi ! Can you try adding the `s3://` prefix ?\r\n```python\r\nf\"s3://{s3_bucket}/{s3_dir}/{dataset_name}\"\r\n```", "Ugh, yeah that was it. Thank you!", "Hi @thejamesmarq, by any chance, did you use multiprocessing `num_proc > 1` when saving your dataset on the s3 bucket ? I'm struggling making it work in a mu...
created_at: 2023-05-05T05:22:59
updated_at: 2024-05-30T16:11:31
closed_at: 2023-05-05T15:01:17
body: ### Describe the bug When trying to save a `DatasetDict` to a private S3 bucket using `save_to_disk`, the artifacts are instead saved locally, and not in the S3 bucket. I have tried using the deprecated `fs` as well as the `storage_options` arguments and I get the same results. ### Steps to reproduce the bug 1. C...
user: thejamesmarq
html_url: https://github.com/huggingface/datasets/issues/5823
pull_request: null
is_pull_request: false
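The resolution confirmed in the comments above is simply adding the `s3://` prefix: without a URL scheme, fsspec-style path handling falls back to the local filesystem, which is why the artifacts landed on disk. A minimal stdlib sketch of that scheme check (the helper name `is_remote_path` is hypothetical, for illustration only):

```python
from urllib.parse import urlparse

def is_remote_path(path: str) -> bool:
    # Paths without a URL scheme (e.g. "bucket/dir/name") are treated as
    # local filesystem paths; "s3://bucket/dir/name" is routed to a remote
    # filesystem implementation.
    return urlparse(path).scheme not in ("", "file")

print(is_remote_path("my-bucket/datasets/squad"))       # False: saved locally
print(is_remote_path("s3://my-bucket/datasets/squad"))  # True: routed to S3
```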
id: 1696627308
number: 5822
title: Audio Dataset with_format torch problem
state: closed
comments: [ "Hi ! Can you try with a more recent version of `datasets` ?", "Ok, yes it worked with the most recent version. Thanks" ]
created_at: 2023-05-04T20:07:51
updated_at: 2023-05-11T20:45:53
closed_at: 2023-05-11T20:45:53
body: ### Describe the bug Common Voice v10 Delta (German) Dataset from here https://commonvoice.mozilla.org/de/datasets ``` audio_dataset = \ (Dataset .from_dict({"audio": ('/tmp/cv-corpus-10.0-delta-2022-07-04/de/clips/' + df.path).to_list()}) .cast_column("audio", Audio(sampling_rate=16_000)) .with...
user: paulbauriegel
html_url: https://github.com/huggingface/datasets/issues/5822
pull_request: null
is_pull_request: false
id: 1696400343
number: 5821
title: IterableDataset Arrow formatting
state: closed
comments: [ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
created_at: 2023-05-04T17:23:43
updated_at: 2023-05-31T09:43:26
closed_at: 2023-05-31T09:36:18
body: Adding an optional `.iter_arrow` to examples iterable. This allows to use Arrow formatting in map/filter. This will also be useful for torch formatting, since we can reuse the TorchFormatter that converts Arrow data to torch tensors Related to https://github.com/huggingface/datasets/issues/5793 and https://github...
user: lhoestq
html_url: https://github.com/huggingface/datasets/pull/5821
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/5821", "html_url": "https://github.com/huggingface/datasets/pull/5821", "diff_url": "https://github.com/huggingface/datasets/pull/5821.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5821.patch", "merged_at": "2023-05-31T09:36...
is_pull_request: true
id: 1695892811
number: 5820
title: Incomplete docstring for `BuilderConfig`
state: closed
comments: [ "Thanks for reporting! You are more than welcome to improve `BuilderConfig`'s docstring.\r\n\r\nThis class serves an identical purpose as `tensorflow_datasets`'s `BuilderConfig`, and its docstring is [here](https://github.com/tensorflow/datasets/blob/a95e38b5bb018312c3d3720619c2a8ef83ebf57f/tensorflow_datasets/core...
created_at: 2023-05-04T12:14:34
updated_at: 2023-05-05T12:31:56
closed_at: 2023-05-05T12:31:56
body: Hi guys ! I stumbled upon this docstring while working on a project. Some of the attributes have missing descriptions. https://github.com/huggingface/datasets/blob/bc5fef5b6d91f009e4101684adcb374df2c170f6/src/datasets/builder.py#L104-L117
user: Laurent2916
html_url: https://github.com/huggingface/datasets/issues/5820
pull_request: null
is_pull_request: false
id: 1695536738
number: 5819
title: Cannot pickle error in Dataset.from_generator()
state: closed
comments: [ "Hi! It should work if you put `model = torch.compile(model)` inside the `generate_data` function. If a referenced object is outside, it needs to be pickable, and that's not the case for the compiled models (or functions). ", "> Hi! It should work if you put `model = torch.compile(model)` inside the `generate_da...
created_at: 2023-05-04T08:39:09
updated_at: 2023-05-05T19:20:59
closed_at: 2023-05-05T19:20:58
body: ### Describe the bug I'm trying to use Dataset.from_generator() to generate a large dataset. ### Steps to reproduce the bug Code to reproduce: ``` from transformers import T5Tokenizer, T5ForConditionalGeneration, GenerationConfig import torch from tqdm import tqdm from datasets import load_dataset tokenizer...
user: xinghaow99
html_url: https://github.com/huggingface/datasets/issues/5819
pull_request: null
is_pull_request: false
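The fix quoted in the comments above (move `model = torch.compile(model)` inside `generate_data`) comes down to a pickling rule that can be shown with the stdlib alone. This sketch uses a local lambda as a stand-in for a compiled model (an assumption for illustration; no torch involved):

```python
import pickle

def make_model():
    # Stand-in for torch.compile(model): a local lambda, which -- like a
    # compiled model -- cannot be pickled.
    return lambda x: x + 1

model = make_model()  # module-level object, referenced from outside the generator

def generate_data_ok():
    # Fix from the thread: build the unpicklable object *inside* the
    # generator, so it is constructed locally instead of being pickled
    # along with the function.
    local_model = make_model()
    for i in range(3):
        yield {"pred": local_model(i)}

# Referencing `model` from a shipped generator would require pickling it:
try:
    pickle.dumps(model)
except Exception as e:
    print("cannot pickle:", type(e).__name__)

print([ex["pred"] for ex in generate_data_ok()])  # [1, 2, 3]
```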
id: 1695052555
number: 5818
title: Ability to update a dataset
state: open
comments: [ "This [reply](https://discuss.huggingface.co/t/how-do-i-add-things-rows-to-an-already-saved-dataset/27423) from @mariosasko on the forums may be useful :)", "In this case, I think we can avoid the `PermissionError` by unpacking the underlying `ConcatenationTable` and saving only the newly added data blocks (in ne...
created_at: 2023-05-04T01:08:13
updated_at: 2023-05-04T20:43:39
closed_at: null
body: ### Feature request The ability to load a dataset, add or change something, and save it back to disk. Maybe it's possible, but I can't work out how to do it, e.g. this fails: ```py import datasets dataset = datasets.load_from_disk("data/test1") dataset = dataset.add_item({"text": "A new item"}) dataset.sav...
user: davidgilbertson
html_url: https://github.com/huggingface/datasets/issues/5818
pull_request: null
is_pull_request: false
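The discussion above boils down to not overwriting files that the loaded dataset may still have open or memory-mapped. The general-purpose pattern is to write to a new file and atomically replace the old one. A minimal stdlib sketch over a plain JSON file (the `update_records` helper is hypothetical; `datasets` itself stores Arrow files, not JSON):

```python
import json
import os
import tempfile

def update_records(path: str, new_item: dict) -> None:
    # Load, modify, then write to a sibling temp file and atomically
    # replace the original -- never write over a file while a reader
    # may still have it open.
    with open(path) as f:
        records = json.load(f)
    records.append(new_item)
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(records, f)
    os.replace(tmp, path)
```

`os.replace` is atomic on POSIX filesystems, so a concurrent reader sees either the old file or the new one, never a half-written state.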
id: 1694891866
number: 5817
title: Setting `num_proc` errors when `.map` returns additional items.
state: closed
comments: [ "Hi ! Unfortunately I couldn't reproduce on my side locally and with datasets 2.11 and python 3.10.11 on colab.\r\nWhat version of `multiprocess` are you using ?", "I've got `multiprocess` version `0.70.14`.\r\n\r\nI've done some more testing and the error only occurs in PyCharm's Python Console. It seems to be [...
created_at: 2023-05-03T21:46:53
updated_at: 2023-05-04T21:14:21
closed_at: 2023-05-04T20:22:25
body: ### Describe the bug I'm using a map function that returns more rows than are passed in. If I try to use `num_proc` I get: ``` File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 563, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kw...
user: davidgilbertson
html_url: https://github.com/huggingface/datasets/issues/5817
pull_request: null
is_pull_request: false
id: 1694590856
number: 5816
title: Preserve `stopping_strategy` of shuffled interleaved dataset (random cycling case)
state: closed
comments: [ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
created_at: 2023-05-03T18:34:18
updated_at: 2023-05-04T14:31:55
closed_at: 2023-05-04T14:24:49
body: Preserve the `stopping_strategy` in the `RandomlyCyclingMultiSourcesExamplesIterable.shard_data_sources` to fix shuffling a dataset interleaved (from multiple sources) with probabilities. Fix #5812
user: mariosasko
html_url: https://github.com/huggingface/datasets/pull/5816
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/5816", "html_url": "https://github.com/huggingface/datasets/pull/5816", "diff_url": "https://github.com/huggingface/datasets/pull/5816.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5816.patch", "merged_at": "2023-05-04T14:24...
is_pull_request: true
id: 1693216778
number: 5814
title: Repro windows crash
state: closed
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5814). All of your documentation changes will be reflected on that endpoint." ]
created_at: 2023-05-02T23:30:18
updated_at: 2024-01-08T18:30:45
closed_at: 2024-01-08T18:30:45
body: null
user: maddiedawson
html_url: https://github.com/huggingface/datasets/pull/5814
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/5814", "html_url": "https://github.com/huggingface/datasets/pull/5814", "diff_url": "https://github.com/huggingface/datasets/pull/5814.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5814.patch", "merged_at": null }
is_pull_request: true
id: 1693701743
number: 5815
title: Easy way to create a Kaggle dataset from a Huggingface dataset?
state: open
comments: [ "Hi @hrbigelow , I'm no expert for such a question so I'll ping @lhoestq from the `datasets` library (also this issue could be moved there if someone with permission can do it :) )", "Hi ! Many datasets are made of several files, and how they are parsed often requires a python script. Because of that, datasets li...
created_at: 2023-05-02T21:43:33
updated_at: 2023-07-26T16:13:31
closed_at: null
body: I'm not sure whether this is more appropriately addressed with HuggingFace or Kaggle. I would like to somehow directly create a Kaggle dataset from a HuggingFace Dataset. While Kaggle does provide the option to create a dataset from a URI, that URI must point to a single file. For example: ![image](https://user...
user: hrbigelow
html_url: https://github.com/huggingface/datasets/issues/5815
pull_request: null
is_pull_request: false
id: 1691908535
number: 5813
title: [DO-NOT-MERGE] Debug Windows issue at #3
state: closed
comments: []
created_at: 2023-05-02T07:19:34
updated_at: 2023-05-02T07:21:30
closed_at: 2023-05-02T07:21:30
body: TBD
user: HyukjinKwon
html_url: https://github.com/huggingface/datasets/pull/5813
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/5813", "html_url": "https://github.com/huggingface/datasets/pull/5813", "diff_url": "https://github.com/huggingface/datasets/pull/5813.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5813.patch", "merged_at": null }
is_pull_request: true
id: 1691798169
number: 5812
title: Cannot shuffle interleaved IterableDataset with "all_exhausted" stopping strategy
state: closed
comments: []
created_at: 2023-05-02T05:26:17
updated_at: 2023-05-04T14:24:51
closed_at: 2023-05-04T14:24:51
body: ### Describe the bug Shuffling interleaved `IterableDataset` with "all_exhausted" strategy yields non-exhaustive sampling. ### Steps to reproduce the bug ```py from datasets import IterableDataset, interleave_datasets def gen(bias, length): for i in range(length): yield dict(a=bias+i) seed = 42 ...
user: offchan42
html_url: https://github.com/huggingface/datasets/issues/5812
pull_request: null
is_pull_request: false
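For context on what "all_exhausted" means, here is a simplified, deterministic round-robin sketch of the strategy: a source that runs out is restarted (oversampled) until every source has been exhausted at least once. This is an illustration only, not the `datasets` implementation; the bug above concerns the probability-weighted random variant of this cycling.

```python
def interleave_all_exhausted(*sources):
    # Round-robin interleaving with the "all_exhausted" stopping strategy.
    # Assumes every source is non-empty.
    iters = [iter(s) for s in sources]
    exhausted = [False] * len(sources)
    out, i = [], 0
    while not all(exhausted):
        try:
            out.append(next(iters[i]))
        except StopIteration:
            exhausted[i] = True
            if all(exhausted):
                break
            iters[i] = iter(sources[i])  # restart the exhausted source
            out.append(next(iters[i]))
        i = (i + 1) % len(sources)
    return out

print(interleave_all_exhausted([1, 2, 3], [10]))  # [1, 10, 2, 10, 3, 10]
```

The shorter source is cycled until the longest one finishes, which is exactly the guarantee the reported shuffle was losing.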
id: 1689919046
number: 5811
title: load_dataset: TypeError: 'NoneType' object is not callable, on local dataset filename changes
state: open
comments: [ "This error means a `DatasetBuilder` subclass that generates the dataset could not be found inside the script, so make sure `dushowxa-characters/dushowxa-characters.py `is a valid dataset script (assuming `path_or_dataset` is `dushowxa-characters`)\r\n\r\nAlso, we should improve the error to make it more obvious wh...
created_at: 2023-04-30T13:27:17
updated_at: 2025-02-27T07:32:30
closed_at: null
body: ### Describe the bug I've adapted Databrick's [train_dolly.py](/databrickslabs/dolly/blob/master/train_dolly.py) to train using a local dataset, which has been working. Upon changing the filenames of the `.json` & `.py` files in my local dataset directory, `dataset = load_dataset(path_or_dataset)["train"]` throws th...
user: durapensa
html_url: https://github.com/huggingface/datasets/issues/5811
pull_request: null
is_pull_request: false
id: 1689917822
number: 5810
title: Add `fn_kwargs` to `map` and `filter` of `IterableDataset` and `IterableDatasetDict`
state: closed
comments: [ "_The documentation is not available anymore as the PR was closed or merged._", "Sorry, the local test passed because it was inadvertently testing the main branch. I am currently fixing where the test failed.", "- I have fixed the bug and addressed the above two points.\r\n- I have tested locally and confirmed ...
created_at: 2023-04-30T13:23:01
updated_at: 2023-05-22T08:12:39
closed_at: 2023-05-22T08:05:31
body: # Overview I've added an argument`fn_kwargs` for map and filter methods of `IterableDataset` and `IterableDatasetDict` classes. # Details Currently, the map and filter methods of some classes related to `IterableDataset` do not allow specifing the arguments passed to the function. This pull request adds `fn_kwargs...
user: yuukicammy
html_url: https://github.com/huggingface/datasets/pull/5810
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/5810", "html_url": "https://github.com/huggingface/datasets/pull/5810", "diff_url": "https://github.com/huggingface/datasets/pull/5810.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5810.patch", "merged_at": "2023-05-22T08:05...
is_pull_request: true
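The semantics this PR adds can be sketched in a few lines of plain Python: extra keyword arguments are forwarded to the mapped function on every call, so the function does not have to capture them via a closure or `functools.partial`. A stdlib illustration (the helper names here are hypothetical, not the library API):

```python
def map_iterable(examples, function, fn_kwargs=None):
    # Forward fn_kwargs to `function` on every call, mirroring the
    # behavior the PR adds to IterableDataset.map/filter.
    fn_kwargs = fn_kwargs or {}
    for example in examples:
        yield function(example, **fn_kwargs)

def add_prefix(example, prefix=""):
    return {"text": prefix + example["text"]}

data = [{"text": "hello"}, {"text": "world"}]
print(list(map_iterable(data, add_prefix, fn_kwargs={"prefix": ">> "})))
# [{'text': '>> hello'}, {'text': '>> world'}]
```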
id: 1689797293
number: 5809
title: wiki_dpr details for Open Domain Question Answering tasks
state: closed
comments: [ "Hi ! I don't remember exactly how it was done, but maybe you have to embed `f\"{title}<sep>{text}\"` ?\r\n\r\nUsing a HF tokenizer it corresponds to doing\r\n```python\r\ntokenized = tokenizer(titles, texts)\r\n```" ]
created_at: 2023-04-30T06:12:04
updated_at: 2023-07-21T14:11:00
closed_at: 2023-07-21T14:11:00
body: Hey guys! Thanks for creating the wiki_dpr dataset! I am currently trying to combine wiki_dpr and my own datasets. but I don't know how to make the embedding value the same way as wiki_dpr. As an experiment, I embeds the text of id="7" of wiki_dpr, but this result was very different from wiki_dpr.
user: yulgok22
html_url: https://github.com/huggingface/datasets/issues/5809
pull_request: null
is_pull_request: false
id: 1688977237
number: 5807
title: Support parallelized downloading in load_dataset with Spark
state: closed
comments: [ "Hi @lhoestq or other maintainers, this is ready for review, could you please take a look?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5807). All of your documentation changes will be reflected on that endpoint.", "Per the discussion in #5798, will implement with `jo...
created_at: 2023-04-28T18:34:32
updated_at: 2023-05-25T16:54:14
closed_at: 2023-05-25T16:54:14
body: As proposed in https://github.com/huggingface/datasets/issues/5798, this adds support to parallelized downloading in `load_dataset` with Spark, which can speed up the process by distributing the workload to worker nodes. Parallelizing dataset processing is not supported in this PR.
user: es94129
html_url: https://github.com/huggingface/datasets/pull/5807
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/5807", "html_url": "https://github.com/huggingface/datasets/pull/5807", "diff_url": "https://github.com/huggingface/datasets/pull/5807.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5807.patch", "merged_at": null }
is_pull_request: true
id: 1688598095
number: 5806
title: Return the name of the currently loaded file in the load_dataset function.
state: open
comments: [ "Implementing this makes sense (e.g., `tensorflow_datasets`' imagefolder returns image filenames). Also, in Datasets 3.0, we plan only to store the bytes of an image/audio, not its path, so this feature would be useful when the path info is still needed.", "Hey @mariosasko, Can I work on this issue, this one seem...
created_at: 2023-04-28T13:50:15
updated_at: 2025-08-10T05:26:27
closed_at: null
body: ### Feature request Add an optional parameter return_file_name in the load_dataset function. When it is set to True, the function will include the name of the file corresponding to the current line as a feature in the returned output. ### Motivation When training large language models, machine problems may interrupt...
user: s-JoL
html_url: https://github.com/huggingface/datasets/issues/5806
pull_request: null
is_pull_request: false
id: 1688558577
number: 5805
title: Improve `Create a dataset` tutorial
state: open
comments: [ "I can work on this. The link to the tutorial seems to be broken though @polinaeterna. ", "@isunitha98selvan would be great, thank you! which link are you talking about? I think it should work: https://huggingface.co/docs/datasets/create_dataset", "Hey I don't mind working on this issue. From my understanding, ...
created_at: 2023-04-28T13:26:22
updated_at: 2024-07-26T21:16:13
closed_at: null
body: Our [tutorial on how to create a dataset](https://huggingface.co/docs/datasets/create_dataset) is a bit misleading. 1. In **Folder-based builders** section it says that we have two folder-based builders as standard builders, but we also have similar builders (that can be created from directory with data of required f...
user: polinaeterna
html_url: https://github.com/huggingface/datasets/issues/5805
pull_request: null
is_pull_request: false
id: 1688285666
number: 5804
title: Set dev version
state: closed
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5804). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma...
created_at: 2023-04-28T10:10:01
updated_at: 2023-04-28T10:18:51
closed_at: 2023-04-28T10:10:29
body: null
user: lhoestq
html_url: https://github.com/huggingface/datasets/pull/5804
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/5804", "html_url": "https://github.com/huggingface/datasets/pull/5804", "diff_url": "https://github.com/huggingface/datasets/pull/5804.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5804.patch", "merged_at": "2023-04-28T10:10...
is_pull_request: true
id: 1688256290
number: 5803
title: Release: 2.12.0
state: closed
comments: [ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5803). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma...
created_at: 2023-04-28T09:52:11
updated_at: 2023-04-28T10:18:56
closed_at: 2023-04-28T09:54:43
body: null
user: lhoestq
html_url: https://github.com/huggingface/datasets/pull/5803
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/5803", "html_url": "https://github.com/huggingface/datasets/pull/5803", "diff_url": "https://github.com/huggingface/datasets/pull/5803.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5803.patch", "merged_at": "2023-04-28T09:54...
is_pull_request: true
id: 1686509799
number: 5802
title: Validate non-empty data_files
state: closed
comments: [ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
created_at: 2023-04-27T09:51:36
updated_at: 2023-04-27T14:59:47
closed_at: 2023-04-27T14:51:40
body: This PR adds validation of `data_files`, so that they are non-empty (str, list, or dict) or `None` (default). See: https://github.com/huggingface/datasets/pull/5787#discussion_r1178862327
user: albertvillanova
html_url: https://github.com/huggingface/datasets/pull/5802
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/5802", "html_url": "https://github.com/huggingface/datasets/pull/5802", "diff_url": "https://github.com/huggingface/datasets/pull/5802.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5802.patch", "merged_at": "2023-04-27T14:51...
is_pull_request: true
id: 1686348096
number: 5800
title: Change downloaded file permission based on umask
state: closed
comments: [ "_The documentation is not available anymore as the PR was closed or merged._" ]
created_at: 2023-04-27T08:13:30
updated_at: 2023-04-27T09:33:05
closed_at: 2023-04-27T09:30:16
body: This PR changes the permission of downloaded files to cache, so that the umask is taken into account. Related to: - #2157 Fix #5799. CC: @stas00
user: albertvillanova
html_url: https://github.com/huggingface/datasets/pull/5800
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/5800", "html_url": "https://github.com/huggingface/datasets/pull/5800", "diff_url": "https://github.com/huggingface/datasets/pull/5800.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5800.patch", "merged_at": "2023-04-27T09:30...
is_pull_request: true
id: 1686334572
number: 5799
title: Files downloaded to cache do not respect umask
state: closed
comments: []
created_at: 2023-04-27T08:06:05
updated_at: 2023-04-27T09:30:17
closed_at: 2023-04-27T09:30:17
body: As reported by @stas00, files downloaded to the cache do not respect umask: ```bash $ ls -l /path/to/cache/datasets/downloads/ -rw------- 1 uername username 150M Apr 25 16:41 5e646c1d600f065adaeb134e536f6f2f296a6d804bd1f0e1fdcd20ee28c185c6 ``` ### Related to: - #2065
user: albertvillanova
html_url: https://github.com/huggingface/datasets/issues/5799
pull_request: null
is_pull_request: false
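The fix (merged as #5800 above) amounts to masking a permissive base mode with the process umask instead of hard-coding owner-only permissions. A stdlib sketch of that computation (the helper name is hypothetical):

```python
import os

def umask_respecting_mode(base_mode: int = 0o666) -> int:
    # os.umask can only be read by setting it, so set-and-restore.
    current = os.umask(0)
    os.umask(current)
    # Mask the base mode the same way the kernel does for open(2).
    return base_mode & ~current

# With the common umask 0o022 this yields 0o644 (rw-r--r--),
# instead of the 0o600 (rw-------) seen in the report above.
print(oct(0o666 & ~0o022))  # 0o644
```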
id: 1685904526
number: 5798
title: Support parallelized downloading and processing in load_dataset with Spark
state: open
comments: [ "Hi ! We're using process pools for parallelism right now. I was wondering if there's a package that implements the same API as a process pool but runs with Spark under the hood ? That or something similar would be cool because users could use whatever distributed framework they want this way.\r\n\r\nFeel free to p...
created_at: 2023-04-27T00:16:11
updated_at: 2023-05-25T14:11:41
closed_at: null
body: ### Feature request When calling `load_dataset` for datasets that have multiple files, support using Spark to distribute the downloading and processing job to worker nodes when `cache_dir` is a cloud file system shared among nodes. ```python load_dataset(..., use_spark=True) ``` ### Motivation Further speed up ...
user: es94129
html_url: https://github.com/huggingface/datasets/issues/5798
pull_request: null
is_pull_request: false
id: 1685501199
number: 5797
title: load_dataset is case sentitive?
state: open
comments: [ "Hi @haonan-li , thank you for the report! It seems to be a bug on the [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) site, there is even no such dataset as `mbzuai/bactrian-x` on the Hub. I opened and [issue](https://github.com/huggingface/huggingface_hub/issues/1453) there.", "I think `loa...
created_at: 2023-04-26T18:19:04
updated_at: 2023-04-27T11:56:58
closed_at: null
body: ### Describe the bug load_dataset() function is case sensitive? ### Steps to reproduce the bug The following two code, get totally different behavior. 1. load_dataset('mbzuai/bactrian-x','en') 2. load_dataset('MBZUAI/Bactrian-X','en') ### Expected behavior Compare 1 and 2. 1 will download all 52 subsets, sh...
user: haonan-li
html_url: https://github.com/huggingface/datasets/issues/5797
pull_request: null
is_pull_request: false
id: 1685451919
number: 5796
title: Spark docs
state: closed
comments: [ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
created_at: 2023-04-26T17:39:43
updated_at: 2023-04-27T16:41:50
closed_at: 2023-04-27T16:34:45
body: Added a "Use with Spark" doc page to document `Dataset.from_spark` following https://github.com/huggingface/datasets/pull/5701 cc @maddiedawson
user: lhoestq
html_url: https://github.com/huggingface/datasets/pull/5796
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/5796", "html_url": "https://github.com/huggingface/datasets/pull/5796", "diff_url": "https://github.com/huggingface/datasets/pull/5796.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5796.patch", "merged_at": "2023-04-27T16:34...
is_pull_request: true
id: 1685414505
number: 5795
title: Fix spark imports
state: closed
comments: [ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
created_at: 2023-04-26T17:09:32
updated_at: 2023-04-26T17:49:03
closed_at: 2023-04-26T17:39:12
body: null
user: lhoestq
html_url: https://github.com/huggingface/datasets/pull/5795
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/5795", "html_url": "https://github.com/huggingface/datasets/pull/5795", "diff_url": "https://github.com/huggingface/datasets/pull/5795.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5795.patch", "merged_at": "2023-04-26T17:39...
is_pull_request: true
id: 1685196061
number: 5794
title: CI ZeroDivisionError
state: closed
comments: [ "Hello!\r\nThis issue seems to have been fixed in https://github.com/huggingface/transformers/pull/24049 \r\nI was looking for my first issue to work on when I noticed this; not sure if there is a specific protocol for suggesting to close an issue.", "Thanks for informing, @zeppdev. I am closing this issue.\r\n\r...
created_at: 2023-04-26T14:55:23
updated_at: 2024-05-17T09:12:11
closed_at: 2024-05-17T09:12:11
body: Sometimes when running our CI on Windows, we get a ZeroDivisionError: ``` FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore - ZeroDivisionError: float division by zero ``` See for example: - https://github.com/huggingface/datasets/actions/runs/4809358266/jobs/8560513110 - https:/...
user: albertvillanova
html_url: https://github.com/huggingface/datasets/issues/5794
pull_request: null
is_pull_request: false
id: 1684777320
number: 5793
title: IterableDataset.with_format("torch") not working
state: closed
comments: [ "Hi ! Thanks for reporting, I'm working on it ;)" ]
created_at: 2023-04-26T10:50:23
updated_at: 2023-06-13T15:57:06
closed_at: 2023-06-13T15:57:06
body: ### Describe the bug After calling the with_format("torch") method on an IterableDataset instance, the data format is unchanged. ### Steps to reproduce the bug ```python from datasets import IterableDataset def gen(): for i in range(4): yield {"a": [i] * 4} dataset = IterableDataset.from_generator(g...
user: jiangwangyi
html_url: https://github.com/huggingface/datasets/issues/5793
pull_request: null
is_pull_request: false
id: 1683473943
number: 5791
title: TIFF/TIF support
state: closed
comments: [ "The issue with multichannel TIFF images has already been reported in Pillow (https://github.com/python-pillow/Pillow/issues/1888). We can't do much about it on our side.\r\n\r\nStill, to avoid the error, you can bypass the default Pillow decoding and define a custom one as follows:\r\n```python\r\nimport tifffile ...
created_at: 2023-04-25T16:14:18
updated_at: 2024-01-15T16:40:33
closed_at: 2024-01-15T16:40:16
body: ### Feature request I currently have a dataset (with tiff and json files) where I have to do this: `wget path_to_data/images.zip && unzip images.zip` `wget path_to_data/annotations.zip && unzip annotations.zip` Would it make sense a contribution that supports these type of files? ### Motivation instead o...
user: sebasmos
html_url: https://github.com/huggingface/datasets/issues/5791
pull_request: null
is_pull_request: false
id: 1683229126
number: 5790
title: Allow to run CI on push to ci-branch
state: closed
comments: [ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
created_at: 2023-04-25T13:57:26
updated_at: 2023-04-26T13:43:08
closed_at: 2023-04-26T13:35:47
body: This PR allows to run the CI on push to a branch named "ci-*", without needing to open a PR. - This will allow to make CI tests without opening a PR, e.g., for future `huggingface-hub` releases, future dependency releases (like `fsspec`, `pandas`,...) Note that to build the documentation, we already allow it on pus...
user: albertvillanova
html_url: https://github.com/huggingface/datasets/pull/5790
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/5790", "html_url": "https://github.com/huggingface/datasets/pull/5790", "diff_url": "https://github.com/huggingface/datasets/pull/5790.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5790.patch", "merged_at": "2023-04-26T13:35...
is_pull_request: true
id: 1682611179
number: 5789
title: Support streaming datasets that use jsonlines
state: open
comments: []
created_at: 2023-04-25T07:40:02
updated_at: 2023-04-25T07:40:03
closed_at: null
body: Extend support for streaming datasets that use `jsonlines.open`. Currently, if `jsonlines` is installed, `datasets` raises a `FileNotFoundError`: ``` FileNotFoundError: [Errno 2] No such file or directory: 'https://...' ``` See: - https://huggingface.co/datasets/masakhane/afriqa/discussions/1
user: albertvillanova
html_url: https://github.com/huggingface/datasets/issues/5789
pull_request: null
is_pull_request: false
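For context, the jsonlines format itself needs nothing beyond the stdlib: one JSON document per line. A minimal reader over any text file-like object (including a streamed HTTP response wrapped in a text wrapper) looks like this; it is an illustrative sketch, not the `datasets` streaming machinery:

```python
import io
import json

def iter_jsonl(fobj):
    # One JSON document per non-empty line; blank lines are skipped.
    for line in fobj:
        line = line.strip()
        if line:
            yield json.loads(line)

stream = io.StringIO('{"a": 1}\n{"a": 2}\n\n{"a": 3}\n')
print([row["a"] for row in iter_jsonl(stream)])  # [1, 2, 3]
```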
id: 1681136256
number: 5788
title: Prepare tests for hfh 0.14
state: closed
comments: [ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
created_at: 2023-04-24T12:13:03
updated_at: 2023-04-25T14:32:56
closed_at: 2023-04-25T14:25:30
body: Related to the coming release of `huggingface_hub==0.14.0`. It will break some internal tests. The PR fixes these tests. Let's double-check the CI but I expect the fixed tests to be running fine with both `hfh<=0.13.4` and `hfh==0.14`. Worth case scenario, existing PRs will have to be rebased once this fix is merged. ...
user: Wauplin
html_url: https://github.com/huggingface/datasets/pull/5788
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/5788", "html_url": "https://github.com/huggingface/datasets/pull/5788", "diff_url": "https://github.com/huggingface/datasets/pull/5788.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5788.patch", "merged_at": "2023-04-25T14:25...
is_pull_request: true
id: 1680965959
number: 5787
title: Fix inferring module for unsupported data files
state: closed
comments: [ "_The documentation is not available anymore as the PR was closed or merged._", "I think you can revert the last commit - it should fail if data_files={} IMO", "The validation of non-empty data_files is addressed in this PR:\r\n- #5802", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<det...
created_at: 2023-04-24T10:44:50
updated_at: 2023-04-27T13:06:01
closed_at: 2023-04-27T12:57:28
body: This PR raises a FileNotFoundError instead: ``` FileNotFoundError: No (supported) data files or dataset script found in <dataset_name> ``` Fix #5785.
user: albertvillanova
html_url: https://github.com/huggingface/datasets/pull/5787
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/5787", "html_url": "https://github.com/huggingface/datasets/pull/5787", "diff_url": "https://github.com/huggingface/datasets/pull/5787.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5787.patch", "merged_at": "2023-04-27T12:57...
is_pull_request: true
id: 1680957070
number: 5786
title: Multiprocessing in a `filter` or `map` function with a Pytorch model
state: closed
comments: [ "Hi ! PyTorch may hang when calling `load_state_dict()` in a subprocess. To fix that, set the multiprocessing start method to \"spawn\". Since `datasets` uses `multiprocess`, you should do:\r\n\r\n```python\r\n# Required to avoid issues with pytorch (otherwise hangs during load_state_dict in multiprocessing)\r\nimp...
created_at: 2023-04-24T10:38:07
updated_at: 2023-05-30T09:56:30
closed_at: 2023-04-24T10:43:58
body: ### Describe the bug I am trying to use a Pytorch model loaded on CPUs with multiple processes with a `.map` or a `.filter` method. Usually, when dealing with models that are non-pickable, creating a class such that the `map` function is the method `__call__`, and adding `reduce` helps to solve the problem. Howe...
user: HugoLaurencon
html_url: https://github.com/huggingface/datasets/issues/5786
pull_request: null
is_pull_request: false
1,680,956,964
5,785
Unsupported data files raise TypeError: 'NoneType' object is not iterable
closed
[]
2023-04-24T10:38:03
2023-04-27T12:57:30
2023-04-27T12:57:30
Currently, we raise a TypeError for unsupported data files: ``` TypeError: 'NoneType' object is not iterable ``` See: - https://github.com/huggingface/datasets-server/issues/1073 We should give a more informative error message.
albertvillanova
https://github.com/huggingface/datasets/issues/5785
null
false
1,680,950,726
5,784
Raise subprocesses traceback when interrupting
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-04-24T10:34:03
2023-04-26T16:04:42
2023-04-26T15:54:44
When a subprocess hangs in `filter` or `map`, one should be able to get the subprocess' traceback when interrupting the main process. Right now it shows nothing. To do so I `.get()` the subprocesses async results even the main process is stopped with e.g. `KeyboardInterrupt`. I added a timeout in case the subprocess...
lhoestq
https://github.com/huggingface/datasets/pull/5784
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5784", "html_url": "https://github.com/huggingface/datasets/pull/5784", "diff_url": "https://github.com/huggingface/datasets/pull/5784.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5784.patch", "merged_at": "2023-04-26T15:54...
true
1,679,664,393
5,783
Offset overflow while doing regex on a text column
open
[ "Hi! This looks like an Arrow bug, but it can be avoided by reducing the `writer_batch_size`.\r\n\r\n(`ds = ds.map(get_text_caption, writer_batch_size=100)` in Colab runs without issues)\r\n", "@mariosasko I ran into this problem with load_dataset. What should I do", "@AisingioroHao0 You can also pass the `wri...
2023-04-22T19:12:03
2023-09-22T06:44:07
null
### Describe the bug `ArrowInvalid: offset overflow while concatenating arrays` Same error as [here](https://github.com/huggingface/datasets/issues/615) ### Steps to reproduce the bug Steps to reproduce: (dataset is a few GB big so try in colab maybe) ``` import datasets import re ds = datasets.lo...
nishanthcgit
https://github.com/huggingface/datasets/issues/5783
null
false
1,679,622,367
5,782
Support for various audio-loading backends instead of always relying on SoundFile
closed
[ "Hi! \r\n\r\nYou can use `set_transform`/`with_transform` to define a custom decoding for audio formats not supported by `soundfile`:\r\n```python\r\naudio_dataset_amr = Dataset.from_dict({\"audio\": [\"audio_samples/audio.amr\"]})\r\n\r\ndef decode_audio(batch):\r\n batch[\"audio\"] = [read_ffmpeg(audio_path) f...
2023-04-22T17:09:25
2023-05-10T20:23:04
2023-05-10T20:23:04
### Feature request Introduce an option to select from a variety of audio-loading backends rather than solely relying on the SoundFile library. For instance, if the ffmpeg library is installed, it can serve as a fallback loading option. ### Motivation - The SoundFile library, used in [features/audio.py](https://gith...
BoringDonut
https://github.com/huggingface/datasets/issues/5782
null
false
1,679,580,460
5,781
Error using `load_datasets`
closed
[ "It looks like an issue with your installation of scipy, can you try reinstalling it ?", "Sorry for the late reply, but that worked @lhoestq . Thanks for the assist." ]
2023-04-22T15:10:44
2023-05-02T23:41:25
2023-05-02T23:41:25
### Describe the bug I tried to load a dataset using the `datasets` library in a conda jupyter notebook and got the below error. ``` ImportError: dlopen(/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/_iterative.cpython-38-darwin.so, 0x0002): Library not ...
gjyoungjr
https://github.com/huggingface/datasets/issues/5781
null
false
1,679,367,149
5,780
TypeError: 'NoneType' object does not support item assignment
closed
[]
2023-04-22T06:22:43
2023-04-23T08:49:18
2023-04-23T08:49:18
command: ``` def load_datasets(formats, data_dir=datadir, data_files=datafile): dataset = load_dataset(formats, data_dir=datadir, data_files=datafile, split=split, streaming=True, **kwargs) return dataset raw_datasets = DatasetDict() raw_datasets["train"] = load_datasets(“csv”, args.datadir, "train.csv", s...
ben-8543
https://github.com/huggingface/datasets/issues/5780
null
false
1,678,669,865
5,779
Call fs.makedirs in save_to_disk
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-04-21T15:04:28
2023-04-26T12:20:01
2023-04-26T12:11:15
We need to call `fs.makedirs` when saving a dataset using `save_to_disk`, because some fs implementations have actual directories (S3 and others don't) Close https://github.com/huggingface/datasets/issues/5775
lhoestq
https://github.com/huggingface/datasets/pull/5779
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5779", "html_url": "https://github.com/huggingface/datasets/pull/5779", "diff_url": "https://github.com/huggingface/datasets/pull/5779.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5779.patch", "merged_at": "2023-04-26T12:11...
true
1,678,125,951
5,778
Schrödinger's dataset_dict
closed
[ "Hi ! Passing `data_files=\"path/test.json\"` is equivalent to `data_files={\"train\": [\"path/test.json\"]}`, that's why you end up with a train split. If you don't pass `data_files=`, then split names are inferred from the data files names" ]
2023-04-21T08:38:12
2023-07-24T15:15:14
2023-07-24T15:15:14
### Describe the bug If you use load_dataset('json', data_files="path/test.json"), it will return DatasetDict({train:...}). And if you use load_dataset("path"), it will return DatasetDict({test:...}). Why can't the output behavior be unified? ### Steps to reproduce the bug as description above. ### Expected b...
liujuncn
https://github.com/huggingface/datasets/issues/5778
null
false
1,677,655,969
5,777
datasets.load_dataset("code_search_net", "python") : NotADirectoryError: [Errno 20] Not a directory
closed
[ "Note:\r\nI listed the datasets and grepped around to find what appears to be an alternative source for this:\r\n\r\nraw_datasets = load_dataset(\"espejelomar/code_search_net_python_10000_examples\", \"python\")", "Thanks for reporting, @jason-brian-anderson.\r\n\r\nYes, this is a known issue: the [CodeSearchNet]...
2023-04-21T02:08:07
2023-06-05T05:49:52
2023-05-11T11:51:56
### Describe the bug While checking out the [tokenizer tutorial](https://huggingface.co/course/chapter6/2?fw=pt), i noticed getting an error while initially downloading the python dataset used in the examples. The [collab with the error is here](https://colab.research.google.com/github/huggingface/notebooks/blob/ma...
ghost
https://github.com/huggingface/datasets/issues/5777
null
false
1,677,116,100
5,776
Use Pandas' `read_json` in the JSON builder
open
[]
2023-04-20T17:15:49
2023-04-20T17:15:49
null
Instead of PyArrow's `read_json`, we should use `pd.read_json` in the JSON builder for consistency with the CSV and SQL builders (e.g., to address https://github.com/huggingface/datasets/issues/5725). In Pandas2.0, to get the same performance, we can set the `engine` to "pyarrow". The issue is that Colab still doesn...
mariosasko
https://github.com/huggingface/datasets/issues/5776
null
false
1,677,089,901
5,775
ArrowDataset.save_to_disk lost some logic of remote
closed
[ "We just fixed this on `main` and will do a new release soon :)" ]
2023-04-20T16:58:01
2023-04-26T12:11:36
2023-04-26T12:11:17
### Describe the bug https://github.com/huggingface/datasets/blob/e7ce0ac60c7efc10886471932854903a7c19f172/src/datasets/arrow_dataset.py#L1371 Here is the bug point, when I want to save from a `DatasetDict` class and the items of the instance is like `[('train', Dataset({features: ..., num_rows: ...}))]` , there ...
Zoupers
https://github.com/huggingface/datasets/issues/5775
null
false
1,676,716,662
5,774
Fix style
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-04-20T13:21:32
2023-04-20T13:34:26
2023-04-20T13:24:28
Fix C419 issues
lhoestq
https://github.com/huggingface/datasets/pull/5774
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5774", "html_url": "https://github.com/huggingface/datasets/pull/5774", "diff_url": "https://github.com/huggingface/datasets/pull/5774.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5774.patch", "merged_at": "2023-04-20T13:24...
true
1,675,984,633
5,773
train_dataset does not implement __len__
open
[ "Thanks for reporting, @v-yunbin.\r\n\r\nCould you please give more details, the steps to reproduce the bug, the complete error back trace and the environment information (`datasets-cli env`)?", "this is a detail error info from transformers:\r\n```\r\nTraceback (most recent call last):\r\n File \"finetune.py\",...
2023-04-20T04:37:05
2023-07-19T20:33:13
null
when train using data precessored by the datasets, I get follow warning and it leads to that I can not set epoch numbers: `ValueError: The train_dataset does not implement __len__, max_steps has to be specified. The number of steps needs to be known in advance for the learning rate scheduler.`
ben-8543
https://github.com/huggingface/datasets/issues/5773
null
false
1,675,033,510
5,772
Fix JSON builder when missing keys in first row
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-04-19T14:32:57
2023-04-21T06:45:13
2023-04-21T06:35:27
Until now, the JSON builder only considered the keys present in the first element of the list: - Either explicitly: by passing index 0 in `dataset[0].keys()` - Or implicitly: `pa.Table.from_pylist(dataset)`, where "schema (default None): If not passed, will be inferred from the first row of the mapping values" Thi...
albertvillanova
https://github.com/huggingface/datasets/pull/5772
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5772", "html_url": "https://github.com/huggingface/datasets/pull/5772", "diff_url": "https://github.com/huggingface/datasets/pull/5772.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5772.patch", "merged_at": "2023-04-21T06:35...
true
1,674,828,380
5,771
Support cloud storage for loading datasets
closed
[ "A duplicate of https://github.com/huggingface/datasets/issues/5281" ]
2023-04-19T12:43:53
2023-05-07T17:47:41
2023-05-07T17:47:41
### Feature request It seems that the the current implementation supports cloud storage only for `load_from_disk`. It would be nice if a similar functionality existed in `load_dataset`. ### Motivation Motivation is pretty clear -- let users work with datasets located in the cloud. ### Your contribution ...
eli-osherovich
https://github.com/huggingface/datasets/issues/5771
null
false
1,673,581,555
5,770
Add IterableDataset.from_spark
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi again @lhoestq this is ready for review! Not sure I have permission to add people to the reviewers list...", "Cool ! I think you can define `IterableDataset.from_spark` instead of adding `streaming=` in `Dataset.from_spark`, it ...
2023-04-18T17:47:53
2023-05-17T14:07:32
2023-05-17T14:00:38
Follow-up from https://github.com/huggingface/datasets/pull/5701 Related issue: https://github.com/huggingface/datasets/issues/5678
maddiedawson
https://github.com/huggingface/datasets/pull/5770
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5770", "html_url": "https://github.com/huggingface/datasets/pull/5770", "diff_url": "https://github.com/huggingface/datasets/pull/5770.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5770.patch", "merged_at": "2023-05-17T14:00...
true
1,673,441,182
5,769
Tiktoken tokenizers are not pickable
closed
[ "Thanks for reporting, @markovalexander.\r\n\r\nUnfortunately, I'm not able to reproduce the issue: the `tiktoken` tokenizer can be used within `Dataset.map`, both in my local machine and in a Colab notebook: https://colab.research.google.com/drive/1DhJroZgk0sNFJ2Mrz-jYgrmh9jblXaCG?usp=sharing\r\n\r\nAre you sure y...
2023-04-18T16:07:40
2023-05-04T18:55:57
2023-05-04T18:55:57
### Describe the bug Since tiktoken tokenizer is not pickable, it is not possible to use it inside `dataset.map()` with multiprocessing enabled. However, you [made](https://github.com/huggingface/datasets/issues/5536) tiktoken's tokenizers pickable in `datasets==2.10.0` for caching. For some reason, this logic does no...
markovalexander
https://github.com/huggingface/datasets/issues/5769
null
false
1,672,494,561
5,768
load_dataset("squad") doesn't work in 2.7.1 and 2.10.1
closed
[ "Thanks for reporting, @yaseen157.\r\n\r\nCould you please give the complete error stack trace?", "I am not able to reproduce your issue: the dataset loads perfectly on my local machine and on a Colab notebook: https://colab.research.google.com/drive/1Fbdoa1JdNz8DOdX6gmIsOK1nCT8Abj4O?usp=sharing\r\n```python\r\nI...
2023-04-18T07:10:56
2023-04-20T10:27:23
2023-04-20T10:27:22
### Describe the bug There is an issue that seems to be unique to the "squad" dataset, in which it cannot be loaded using standard methods. This issue is most quickly reproduced from the command line, using the HF examples to verify a dataset is loaded properly. This is not a problem with "squad_v2" dataset for e...
yaseen157
https://github.com/huggingface/datasets/issues/5768
null
false
1,672,433,979
5,767
How to use Distill-BERT with different datasets?
closed
[ "Closing this one in favor of the same issue opened in the `transformers` repo." ]
2023-04-18T06:25:12
2023-04-20T16:52:05
2023-04-20T16:52:05
### Describe the bug - `transformers` version: 4.11.3 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.12.0+cu102 (True) - Tensorflow version (GPU?): 2.10.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxL...
sauravtii
https://github.com/huggingface/datasets/issues/5767
null
false
1,671,485,882
5,766
Support custom feature types
open
[ "Hi ! Interesting :) What kind of new types would you like to use ?\r\n\r\nNote that you can already implement your own decoding by using `set_transform` that can decode data on-the-fly when rows are accessed", "An interesting proposal indeed. \r\n\r\nPandas and Polars have the \"extension API\", so doing somethi...
2023-04-17T15:46:41
2024-03-10T11:11:22
null
### Feature request I think it would be nice to allow registering custom feature types with the 🤗 Datasets library. For example, allow to do something along the following lines: ``` from datasets.features import register_feature_type # this would be a new function @register_feature_type class CustomFeature...
jmontalt
https://github.com/huggingface/datasets/issues/5766
null
false
1,671,388,824
5,765
ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['text']
open
[ "You need to remove the `text` and `text_en` columns before passing the dataset to the `DataLoader` to avoid this error:\r\n```python\r\ntokenized_datasets = tokenized_datasets.remove_columns([\"text\", \"text_en\"])\r\n```\r\n", "Thanks @mariosasko. Now I am getting this error:\r\n\r\n```\r\nTraceback (most rece...
2023-04-17T15:00:50
2023-04-25T13:50:45
null
### Describe the bug Following is my code that I am trying to run, but facing an error (have attached the whole error below): My code: ``` from collections import OrderedDict import warnings import flwr as fl import torch import numpy as np import random from torch.utils.data import DataLoader from...
sauravtii
https://github.com/huggingface/datasets/issues/5765
null
false
1,670,740,198
5,764
ConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1
closed
[ "Thanks for reporting, @sauravtii.\r\n\r\nUnfortunately, I'm not able to reproduce the issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"josianem/imdb\")\r\n\r\nIn [2]: ds\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text', 'label'],\r...
2023-04-17T09:08:18
2023-04-18T07:18:20
2023-04-18T07:18:20
### Describe the bug I want to use this (https://huggingface.co/datasets/josianem/imdb) dataset therefore I am trying to load it using the following code: ``` dataset = load_dataset("josianem/imdb") ``` The dataset is not getting loaded and gives the error message as the following: ``` Traceback (most rece...
sauravtii
https://github.com/huggingface/datasets/issues/5764
null
false
1,670,476,302
5,763
fix typo: "mow" -> "now"
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-04-17T06:03:44
2023-04-17T15:01:53
2023-04-17T14:54:46
I noticed a typo as I was reading the datasets documentation. This PR contains a trivial fix changing "mow" to "now."
csris
https://github.com/huggingface/datasets/pull/5763
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5763", "html_url": "https://github.com/huggingface/datasets/pull/5763", "diff_url": "https://github.com/huggingface/datasets/pull/5763.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5763.patch", "merged_at": "2023-04-17T14:54...
true
1,670,326,470
5,762
Not able to load the pile
closed
[ "Thanks for reporting, @surya-narayanan.\r\n\r\nI see you already started a discussion about this on the Community tab of the corresponding dataset: https://huggingface.co/datasets/EleutherAI/the_pile/discussions/10\r\nLet's continue the discussion there!" ]
2023-04-17T03:09:10
2023-04-17T09:37:27
2023-04-17T09:37:27
### Describe the bug Got this error when I am trying to load the pile dataset ``` TypeError: Couldn't cast array of type struct<file: string, id: string> to {'id': Value(dtype='string', id=None)} ``` ### Steps to reproduce the bug Please visit the following sample notebook https://colab.research.goo...
surya-narayanan
https://github.com/huggingface/datasets/issues/5762
null
false
1,670,034,582
5,761
One or several metadata.jsonl were found, but not in the same directory or in a parent directory
open
[ "Also, when generated from a zip archive, the dataset contains only a few images. In my case, 20 versus 2000+ contained in the archive. The generation from folders works as expected.", "Thanks for reporting, @blghtr.\r\n\r\nYou should include the `metadata.jsonl` in your ZIP archives, at the root level directory....
2023-04-16T16:21:55
2023-04-19T11:53:24
null
### Describe the bug An attempt to generate a dataset from a zip archive using imagefolder and metadata.jsonl does not lead to the expected result. Tried all possible locations of the json file: the file in the archive is ignored (generated dataset contains only images), the file next to the archive like [here](http...
blghtr
https://github.com/huggingface/datasets/issues/5761
null
false
1,670,028,072
5,760
Multi-image loading in Imagefolder dataset
open
[ "Supporting this could be useful (I remember a use-case for this on the Hub). Do you agree @polinaeterna? \r\n\r\nImplementing this should be possible if we iterate over metadata files and build image/audio file paths instead of iterating over image/audio files and looking for the corresponding entries in metadata ...
2023-04-16T16:01:05
2024-12-01T11:16:09
null
### Feature request Extend the `imagefolder` dataloading script to support loading multiple images per dataset entry. This only really makes sense if a metadata file is present. Currently you can use the following format (example `metadata.jsonl`: ``` {'file_name': 'path_to_image.png', 'metadata': ...} ... `...
vvvm23
https://github.com/huggingface/datasets/issues/5760
null
false
1,669,977,848
5,759
Can I load in list of list of dict format?
open
[ "Thanks for reporting, @LZY-the-boys.\r\n\r\nCould you please give more details about what is your intended dataset structure? What are the names of the columns and the value of each row?\r\n\r\nCurrently, the JSON-Lines format is supported:\r\n- Each line correspond to one row of the dataset\r\n- Each line is comp...
2023-04-16T13:50:14
2023-04-19T12:04:36
null
### Feature request my jsonl dataset has following format: ``` [{'input':xxx, 'output':xxx},{'input:xxx,'output':xxx},...] [{'input':xxx, 'output':xxx},{'input:xxx,'output':xxx},...] ``` I try to use `datasets.load_dataset('json', data_files=path)` or `datasets.Dataset.from_json`, it raises ``` File "site-p...
LZY-the-boys
https://github.com/huggingface/datasets/issues/5759
null
false
1,669,920,923
5,758
Fixes #5757
closed
[ "The CI can be fixed by merging `main` into your branch. Can you do that before we merge ?", "_The documentation is not available anymore as the PR was closed or merged._", "Done.\n\nOn Thu, Apr 20, 2023 at 6:01 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> The CI can be fixed by merging main into your branch. Ca...
2023-04-16T11:56:01
2023-04-20T15:37:49
2023-04-20T15:30:48
Fixes the bug #5757
eli-osherovich
https://github.com/huggingface/datasets/pull/5758
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5758", "html_url": "https://github.com/huggingface/datasets/pull/5758", "diff_url": "https://github.com/huggingface/datasets/pull/5758.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5758.patch", "merged_at": "2023-04-20T15:30...
true
1,669,910,503
5,757
Tilde (~) is not supported
closed
[]
2023-04-16T11:48:10
2023-04-20T15:30:51
2023-04-20T15:30:51
### Describe the bug It seems that `~` is not recognized correctly in local paths. Whenever I try to use it I get an exception ### Steps to reproduce the bug ```python load_dataset("imagefolder", data_dir="~/data/my_dataset") ``` Will generate the following error: ``` EmptyDatasetError: The directory at ...
eli-osherovich
https://github.com/huggingface/datasets/issues/5757
null
false
1,669,678,080
5,756
Calling shuffle on a IterableDataset with streaming=True, gives "ValueError: cannot reshape array"
closed
[ "Hi! I've merged a PR on the Hub with a fix: https://huggingface.co/datasets/fashion_mnist/discussions/3", "Thanks, this appears to have fixed the issue.\r\n\r\nI've created a PR for the same change in the mnist dataset: https://huggingface.co/datasets/mnist/discussions/3/files" ]
2023-04-16T04:59:47
2023-04-18T03:40:56
2023-04-18T03:40:56
### Describe the bug When calling shuffle on a IterableDataset with streaming=True, I get the following error: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/administrator/Documents/Projects/huggingface/jax-diffusers-sprint-consistency-models/virtualenv/lib/python3.1...
rohfle
https://github.com/huggingface/datasets/issues/5756
null
false
1,669,048,438
5,755
ImportError: cannot import name 'DeprecatedEnum' from 'datasets.utils.deprecation_utils'
closed
[ "update the version. fix" ]
2023-04-14T23:28:54
2023-04-14T23:36:19
2023-04-14T23:36:19
### Describe the bug The module moved to new place? ### Steps to reproduce the bug in the import step, ```python from datasets.utils.deprecation_utils import DeprecatedEnum ``` error: ``` ImportError: cannot import name 'DeprecatedEnum' from 'datasets.utils.deprecation_utils' ``` ### Expected behavior...
fivejjs
https://github.com/huggingface/datasets/issues/5755
null
false
1,668,755,035
5,754
Minor tqdm fixes
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-04-14T18:15:14
2023-04-20T15:27:58
2023-04-20T15:21:00
`GeneratorBasedBuilder`'s TQDM bars were not used as context managers. This PR fixes that (missed these bars in https://github.com/huggingface/datasets/pull/5560). Also, this PR modifies the single-proc `save_to_disk` to fix the issue with the TQDM bar not accumulating the progress in the multi-shard setting (again...
mariosasko
https://github.com/huggingface/datasets/pull/5754
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5754", "html_url": "https://github.com/huggingface/datasets/pull/5754", "diff_url": "https://github.com/huggingface/datasets/pull/5754.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5754.patch", "merged_at": "2023-04-20T15:21...
true
1,668,659,536
5,753
[IterableDatasets] Add column followed by interleave datasets gives bogus outputs
closed
[ "Problem with the code snippet! Using global vars and functions was not a good idea with iterable datasets!\r\n\r\nIf we update to:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\noriginal_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\n\r\n# now add a new co...
2023-04-14T17:32:31
2025-07-04T05:22:53
2023-04-14T17:36:37
### Describe the bug If we add a new column to our iterable dataset using the hack described in #5752, when we then interleave datasets the new column is pinned to one value. ### Steps to reproduce the bug What we're going to do here is: 1. Load an iterable dataset in streaming mode (`original_dataset`) 2. A...
sanchit-gandhi
https://github.com/huggingface/datasets/issues/5753
null
false
1,668,574,209
5,752
Streaming dataset looses `.feature` method after `.add_column`
open
[ "I believe the issue resides in this line:\r\nhttps://github.com/huggingface/datasets/blob/7c3a9b057c476c40d157bd7a5d57f49066239df0/src/datasets/iterable_dataset.py#L1415\r\n\r\nIf we pass the **new** features of the dataset to the `.map` method we can return the features after adding a column, e.g.:\r\n```python\r...
2023-04-14T16:39:50
2024-01-18T10:15:20
null
### Describe the bug After appending a new column to a streaming dataset using `.add_column`, we can no longer access the list of dataset features using the `.feature` method. ### Steps to reproduce the bug ```python from datasets import load_dataset original_dataset = load_dataset("librispeech_asr", "clean", sp...
sanchit-gandhi
https://github.com/huggingface/datasets/issues/5752
null
false
1,668,333,316
5,751
Consistent ArrayXD Python formatting + better NumPy/Pandas formatting
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-04-14T14:13:59
2023-04-20T14:43:20
2023-04-20T14:40:34
Return a list of lists instead of a list of NumPy arrays when converting the variable-shaped `ArrayXD` to Python. Additionally, improve the NumPy conversion by returning a numeric NumPy array when the offsets are equal or a NumPy object array when they aren't, and allow converting the variable-shaped `ArrayXD` to Panda...
mariosasko
https://github.com/huggingface/datasets/pull/5751
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5751", "html_url": "https://github.com/huggingface/datasets/pull/5751", "diff_url": "https://github.com/huggingface/datasets/pull/5751.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5751.patch", "merged_at": "2023-04-20T14:40...
true
1,668,289,067
5,750
Fail to create datasets from a generator when using Google Big Query
closed
[ "`from_generator` expects a generator function, not a generator object, so this should work:\r\n```python\r\nfrom datasets import Dataset\r\nfrom google.cloud import bigquery\r\n\r\nclient = bigquery.Client()\r\n\r\ndef gen()\r\n # Perform a query.\r\n QUERY = (\r\n 'SELECT name FROM `bigquery-public-d...
2023-04-14T13:50:59
2023-04-17T12:20:43
2023-04-17T12:20:43
### Describe the bug Creating a dataset from a generator using `Dataset.from_generator()` fails if the generator is the [Google Big Query Python client](https://cloud.google.com/python/docs/reference/bigquery/latest). The problem is that the Big Query client is not pickable. And the function `create_config_id` tries t...
ivanprado
https://github.com/huggingface/datasets/issues/5750
null
false
1,668,016,321
5,749
AttributeError: 'Version' object has no attribute 'match'
closed
[ "I got the same error, and the official website for visual genome is down. Did you solve this problem? ", "I am in the same situation now :( ", "Thanks for reporting, @gulnaz-zh.\r\n\r\nI am investigating it.", "The host server is down: https://visualgenome.org/\r\n\r\nWe are contacting the dataset authors.",...
2023-04-14T10:48:06
2023-06-30T11:31:17
2023-04-18T12:57:08
### Describe the bug When I run from datasets import load_dataset data = load_dataset("visual_genome", 'region_descriptions_v1.2.0') AttributeError: 'Version' object has no attribute 'match' ### Steps to reproduce the bug from datasets import load_dataset data = load_dataset("visual_genome", 'region_descripti...
gulnaz-zh
https://github.com/huggingface/datasets/issues/5749
null
false
1,667,517,024
5,748
[BUG FIX] Issue 5739
open
[]
2023-04-14T05:07:31
2023-04-14T05:07:31
null
A fix for https://github.com/huggingface/datasets/issues/5739
airlsyn
https://github.com/huggingface/datasets/pull/5748
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5748", "html_url": "https://github.com/huggingface/datasets/pull/5748", "diff_url": "https://github.com/huggingface/datasets/pull/5748.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5748.patch", "merged_at": null }
true
1,667,270,412
5,747
[WIP] Add Dataset.to_spark
closed
[]
2023-04-13T23:20:03
2024-01-08T18:31:50
2024-01-08T18:31:50
null
maddiedawson
https://github.com/huggingface/datasets/pull/5747
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5747", "html_url": "https://github.com/huggingface/datasets/pull/5747", "diff_url": "https://github.com/huggingface/datasets/pull/5747.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5747.patch", "merged_at": null }
true
1,667,102,459
5,746
Fix link in docs
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-04-13T20:45:19
2023-04-14T13:15:38
2023-04-14T13:08:42
Fixes a broken link in the use_with_pytorch docs
bbbxyz
https://github.com/huggingface/datasets/pull/5746
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5746", "html_url": "https://github.com/huggingface/datasets/pull/5746", "diff_url": "https://github.com/huggingface/datasets/pull/5746.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5746.patch", "merged_at": "2023-04-14T13:08...
true
1,667,086,143
5,745
[BUG FIX] Issue 5744
open
[ "Have met the same problem with datasets==2.8.0, pandas==2.0.0. It could be solved by installing the latest version of datasets or using datasets==2.8.0, pandas==1.5.3.", "Pandas 2.0.0 has removed support to passing `mangle_dupe_cols`.\r\n\r\nHowever, our `datasets` library does not use this parameter: it only pa...
2023-04-13T20:29:55
2023-04-21T15:22:43
null
A temporal fix for https://github.com/huggingface/datasets/issues/5744.
keyboardAnt
https://github.com/huggingface/datasets/pull/5745
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5745", "html_url": "https://github.com/huggingface/datasets/pull/5745", "diff_url": "https://github.com/huggingface/datasets/pull/5745.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5745.patch", "merged_at": null }
true
1,667,076,620
5,744
[BUG] With Pandas 2.0.0, `load_dataset` raises `TypeError: read_csv() got an unexpected keyword argument 'mangle_dupe_cols'`
closed
[ "Thanks for reporting, @keyboardAnt.\r\n\r\nWe haven't noticed any crash in our CI tests. Could you please indicate specifically the `load_dataset` command that crashes in your side, so that we can reproduce it?", "This has been fixed in `datasets` 2.11", "I am still getting this bug with the latest pandas and ...
2023-04-13T20:21:28
2024-04-09T16:13:59
2023-07-06T17:01:59
The `load_dataset` function with Pandas `1.5.3` has no issue (just a FutureWarning) but crashes with Pandas `2.0.0`. For your convenience, I opened a draft Pull Request to fix it quickly: https://github.com/huggingface/datasets/pull/5745 --- * The FutureWarning mentioned above: ``` FutureWarning: the 'mangle_...
keyboardAnt
https://github.com/huggingface/datasets/issues/5744
null
false
1,666,843,832
5,743
dataclass.py in virtual environment is overriding the stdlib module "dataclasses"
closed
[ "We no longer depend on `dataclasses` (for almost a year), so I don't think our package is the problematic one. \r\n\r\nI think it makes more sense to raise this issue in the `dataclasses` repo: https://github.com/ericvsmith/dataclasses." ]
2023-04-13T17:28:33
2023-04-17T12:23:18
2023-04-17T12:23:18
### Describe the bug "e:\Krish_naik\FSDSRegression\venv\Lib\dataclasses.py" is overriding the stdlib module "dataclasses" ### Steps to reproduce the bug module issue ### Expected behavior overriding the stdlib module "dataclasses" ### Environment info VS code
syedabdullahhassan
https://github.com/huggingface/datasets/issues/5743
null
false
1,666,209,738
5,742
Warning specifying future change in to_tf_dataset behaviour
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-04-13T11:10:00
2023-04-21T13:18:14
2023-04-21T13:11:09
Warning specifying future changes happening to `to_tf_dataset` behaviour when #5602 is merged in
amyeroberts
https://github.com/huggingface/datasets/pull/5742
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5742", "html_url": "https://github.com/huggingface/datasets/pull/5742", "diff_url": "https://github.com/huggingface/datasets/pull/5742.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5742.patch", "merged_at": "2023-04-21T13:11...
true
1,665,860,919
5,741
Fix CI warnings
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-04-13T07:17:02
2023-04-13T09:48:10
2023-04-13T09:40:50
Fix warnings in our CI tests.
albertvillanova
https://github.com/huggingface/datasets/pull/5741
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5741", "html_url": "https://github.com/huggingface/datasets/pull/5741", "diff_url": "https://github.com/huggingface/datasets/pull/5741.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5741.patch", "merged_at": "2023-04-13T09:40...
true
1,664,132,130
5,740
Fix CI mock filesystem fixtures
closed
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-04-12T08:52:35
2023-04-13T11:01:24
2023-04-13T10:54:13
This PR fixes the fixtures of our CI mock filesystems. Before, we had to pass `clobber=True` to `fsspec.register_implementation` to overwrite the still present previously added "mock" filesystem. That meant that the mock filesystem fixture was not working properly, because the previously added "mock" filesystem, sho...
albertvillanova
https://github.com/huggingface/datasets/pull/5740
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5740", "html_url": "https://github.com/huggingface/datasets/pull/5740", "diff_url": "https://github.com/huggingface/datasets/pull/5740.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5740.patch", "merged_at": "2023-04-13T10:54...
true
1,663,762,901
5,739
weird result during dataset split when data path starts with `/data`
open
[ "Same problem.", "hi! \r\nI think you can run python from `/data/train/raw/` directory and load dataset as `load_dataset(\"code_contests\")` to mitigate this issue as a workaround. \r\n@ericxsun Do you want to open a PR to fix the regex? As you already found the solution :) ", "> hi! I think you can run python ...
2023-04-12T04:51:35
2023-04-21T14:20:59
null
### Describe the bug The regex defined here https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/utils/py_utils.py#L158 will cause a weird result during dataset split when data path starts with `/data` ### Steps to reproduce the bug 1. clone dataset into local path ...
airlsyn
https://github.com/huggingface/datasets/issues/5739
null
false
1,663,477,690
5,738
load_dataset("text","dataset.txt") loads the wrong dataset!
closed
[ "You need to provide a text file as `data_files`, not as a configuration:\r\n\r\n```python\r\nmy_dataset = load_dataset(\"text\", data_files=\"TextFile.txt\")\r\n```\r\n\r\nOtherwise, since `data_files` is `None`, it picks up Colab's sample datasets from the `content` dir." ]
2023-04-12T01:07:46
2023-04-19T12:08:27
2023-04-19T12:08:27
### Describe the bug I am trying to load my own custom text dataset using the load_dataset function. My dataset is a bunch of ordered text, think along the lines of shakespeare plays. However, after I load the dataset and I inspect it, the dataset is a table with a bunch of latitude and longitude values! What in th...
Tylersuard
https://github.com/huggingface/datasets/issues/5738
null
false
1,662,919,811
5,737
ClassLabel Error
closed
[ "Hi, you can use the `cast_column` function to change the feature type from a `Value(int64)` to `ClassLabel`:\r\n\r\n```py\r\ndataset = dataset.cast_column(\"label\", ClassLabel(names=[\"label_1\", \"label_2\", \"label_3\"]))\r\nprint(dataset.features)\r\n{'text': Value(dtype='string', id=None),\r\n 'label': ClassL...
2023-04-11T17:14:13
2023-04-13T16:49:57
2023-04-13T16:49:57
### Describe the bug I still getting the error "call() takes 1 positional argument but 2 were given" even after ensuring that the value being passed to the label object is a single value and that the ClassLabel object has been created with the correct number of label classes ### Steps to reproduce the bug from...
mrcaelumn
https://github.com/huggingface/datasets/issues/5737
null
false
1,662,286,061
5,736
FORCE_REDOWNLOAD raises "Directory not empty" exception on second run
open
[ "Hi ! I couldn't reproduce your issue :/\r\n\r\nIt seems that `shutil.rmtree` failed. It is supposed to work even if the directory is not empty, but you still end up with `OSError: [Errno 39] Directory not empty:`. Can you make sure another process is not using this directory at the same time ?", "I have the same...
2023-04-11T11:29:15
2023-11-30T07:16:58
null
### Describe the bug Running `load_dataset(..., download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)` twice raises a `Directory not empty` exception on the second run. ### Steps to reproduce the bug I cannot test this on datasets v2.11.0 due to #5711, but this happens in v2.10.1. 1. Set up a script `my_dataset.p...
rcasero
https://github.com/huggingface/datasets/issues/5736
null
false
1,662,150,903
5,735
Implement sharding on merged iterable datasets
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi ! What if one of the sub-iterables only has one shard ? In that case I don't think we'd end up with a correctly interleaved dataset, since only rank 0 would yield examples from this sub-iterable", "Hi ! \r\nI just tested this ou...
2023-04-11T10:02:25
2023-04-27T16:39:04
2023-04-27T16:32:09
This PR allows sharding of merged iterable datasets. Merged iterable datasets, created for instance with the `interleave_datasets` command, are comprised of multiple sub-iterables, one for each dataset that has been merged. With this PR, sharding a merged iterable will result in multiple merged datasets each comprised of sh...
bruno-hays
https://github.com/huggingface/datasets/pull/5735
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5735", "html_url": "https://github.com/huggingface/datasets/pull/5735", "diff_url": "https://github.com/huggingface/datasets/pull/5735.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5735.patch", "merged_at": "2023-04-27T16:32...
true
1,662,058,028
5,734
Remove temporary pin of fsspec
closed
[]
2023-04-11T09:04:17
2023-04-11T11:04:52
2023-04-11T11:04:52
Once root cause is found and fixed, remove the temporary pin introduced by: - #5731
albertvillanova
https://github.com/huggingface/datasets/issues/5734
null
false
1,662,039,191
5,733
Unpin fsspec
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-04-11T08:52:12
2023-04-11T11:11:45
2023-04-11T11:04:51
In `fsspec-2023.4.0` the default value for clobber when registering an implementation was changed from True to False. See: - https://github.com/fsspec/filesystem_spec/pull/1237 This PR recovers previous behavior by passing clobber True when registering mock implementations. This PR also removes the temporary pin in...
albertvillanova
https://github.com/huggingface/datasets/pull/5733
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5733", "html_url": "https://github.com/huggingface/datasets/pull/5733", "diff_url": "https://github.com/huggingface/datasets/pull/5733.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5733.patch", "merged_at": "2023-04-11T11:04...
true
1,662,020,571
5,732
Enwik8 should support the standard split
closed
[ "#self-assign", "The Enwik8 pipeline is not present in this codebase, and is hosted elsewhere. I have opened a PR [there](https://huggingface.co/datasets/enwik8/discussions/4) instead. " ]
2023-04-11T08:38:53
2023-04-11T09:28:17
2023-04-11T09:28:16
### Feature request The HuggingFace Datasets library currently supports two BuilderConfigs for Enwik8. One config yields individual lines as examples, while the other config yields the entire dataset as a single example. Both support only a monolithic split: it is all grouped as "train". The HuggingFace Datasets l...
lucaslingle
https://github.com/huggingface/datasets/issues/5732
null
false
1,662,012,913
5,731
Temporarily pin fsspec
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-04-11T08:33:15
2023-04-11T08:57:45
2023-04-11T08:47:55
Fix #5730.
albertvillanova
https://github.com/huggingface/datasets/pull/5731
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5731", "html_url": "https://github.com/huggingface/datasets/pull/5731", "diff_url": "https://github.com/huggingface/datasets/pull/5731.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5731.patch", "merged_at": "2023-04-11T08:47...
true
1,662,007,926
5,730
CI is broken: ValueError: Name (mock) already in the registry and clobber is False
closed
[]
2023-04-11T08:29:46
2023-04-11T08:47:56
2023-04-11T08:47:56
CI is broken for `test_py310`. See: https://github.com/huggingface/datasets/actions/runs/4665326892/jobs/8258580948 ``` =========================== short test summary info ============================ ERROR tests/test_builder.py::test_builder_with_filesystem_download_and_prepare - ValueError: Name (mock) already ...
albertvillanova
https://github.com/huggingface/datasets/issues/5730
null
false
1,661,929,923
5,729
Fix nondeterministic sharded data split order
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "The error in the CI was unrelated to this PR. I have merged main branch once that has been fixed.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### B...
2023-04-11T07:34:20
2023-04-26T15:12:25
2023-04-26T15:05:12
This PR makes the order of the split names deterministic. Before it was nondeterministic because we were iterating over `set` elements. Fix #5728.
albertvillanova
https://github.com/huggingface/datasets/pull/5729
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5729", "html_url": "https://github.com/huggingface/datasets/pull/5729", "diff_url": "https://github.com/huggingface/datasets/pull/5729.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5729.patch", "merged_at": "2023-04-26T15:05...
true
1,661,925,932
5,728
The order of data split names is nondeterministic
closed
[]
2023-04-11T07:31:25
2023-04-26T15:05:13
2023-04-26T15:05:13
After this CI error: https://github.com/huggingface/datasets/actions/runs/4639528358/jobs/8210492953?pr=5718 ``` FAILED tests/test_data_files.py::test_get_data_files_patterns[data_file_per_split4] - AssertionError: assert ['random', 'train'] == ['train', 'random'] At index 0 diff: 'random' != 'train' Full diff:...
albertvillanova
https://github.com/huggingface/datasets/issues/5728
null
false
1,661,536,363
5,727
load_dataset fails with FileNotFound error on Windows
closed
[ "Hi! Can you please paste the entire error stack trace, not only the last few lines?", "`----> 1 dataset = datasets.load_dataset(\"glue\", \"ax\")\r\n\r\nFile ~\\anaconda3\\envs\\huggingface\\Lib\\site-packages\\datasets\\load.py:1767, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, ...
2023-04-10T23:21:12
2023-07-21T14:08:20
2023-07-21T14:08:19
### Describe the bug Although I can import and run the datasets library in a Colab environment, I cannot successfully load any data on my own machine (Windows 10) despite following the install steps: (1) create conda environment (2) activate environment (3) install with: `conda install -c huggingface -c conda-...
joelkowalewski
https://github.com/huggingface/datasets/issues/5727
null
false
1,660,944,807
5,726
Fallback JSON Dataset loading does not load all values when features specified manually
closed
[ "Thanks for reporting, @myluki2000.\r\n\r\nI am working on a fix." ]
2023-04-10T15:22:14
2023-04-21T06:35:28
2023-04-21T06:35:28
### Describe the bug The fallback JSON dataset loader located here: https://github.com/huggingface/datasets/blob/1c4ec00511868bd881e84a6f7e0333648d833b8e/src/datasets/packaged_modules/json/json.py#L130-L153 does not load the values of features correctly when features are specified manually and not all features...
myluki2000
https://github.com/huggingface/datasets/issues/5726
null
false
1,660,455,202
5,725
How to limit the number of examples in dataset, for testing?
closed
[ "Hi! You can use the `nrows` parameter for this:\r\n```python\r\ndata = load_dataset(\"json\", data_files=data_path, nrows=10)\r\n```", "@mariosasko I get:\r\n\r\n`TypeError: __init__() got an unexpected keyword argument 'nrows'`", "I misread the format in which the dataset is stored - the `nrows` parameter wo...
2023-04-10T08:41:43
2023-04-21T06:16:24
2023-04-21T06:16:24
### Describe the bug I am using this command: `data = load_dataset("json", data_files=data_path)` However, I want to add a parameter, to limit the number of loaded examples to be 10, for development purposes, but can't find this simple parameter. ### Steps to reproduce the bug In the description. ### Expected beh...
ndvbd
https://github.com/huggingface/datasets/issues/5725
null
false
1,659,938,135
5,724
Error after shuffling streaming IterableDatasets with downloaded dataset
closed
[ "Moving `\"en\"` to the end of the path instead of passing it as a config name should fix the error:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset('/path/to/your/data/dir/en', streaming=True, split='train')\r\ndataset = dataset.shuffle(buffer_size=10_000, seed=42)\r\nnext(iter(dataset))\r\n```\...
2023-04-09T16:58:44
2023-04-20T20:37:30
2023-04-20T20:37:30
### Describe the bug I downloaded the C4 dataset, and used streaming IterableDatasets to read it. Everything went normal until I used `dataset = dataset.shuffle(seed=42, buffer_size=10_000)` to shuffle the dataset. Shuffled dataset will throw the following error when it is used by `next(iter(dataset))`: ``` File "/d...
szxiangjn
https://github.com/huggingface/datasets/issues/5724
null
false