Dataset schema (as shown in the viewer preview):

| Column          | Type         | Range / values                               |
|-----------------|--------------|----------------------------------------------|
| id              | int64        | 599M to 3.48B                                |
| number          | int64        | 1 to 7.8k                                    |
| title           | string       | lengths 1 to 290                             |
| state           | string       | 2 values                                     |
| comments        | list         | lengths 0 to 30                              |
| created_at      | timestamp[s] | 2020-04-14 10:18:02 to 2025-10-05 06:37:50   |
| updated_at      | timestamp[s] | 2020-04-27 16:04:17 to 2025-10-05 10:32:43   |
| closed_at       | timestamp[s] | 2020-04-14 12:01:40 to 2025-10-01 13:56:03   |
| body            | string       | lengths 0 to 228k                            |
| user            | string       | lengths 3 to 26                              |
| html_url        | string       | lengths 46 to 51                             |
| pull_request    | dict         |                                              |
| is_pull_request | bool         | 2 classes                                    |
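The rows below follow this schema, one field per line in column order (id, number, title, state, comments, created_at, updated_at, closed_at, body, user, html_url, pull_request, is_pull_request). As a minimal sketch only: the repository id behind this preview is not named here, so the one used below is a placeholder, but a GitHub-issues dataset with this schema could be loaded and inspected with the `datasets` library like so:

```python
from datasets import load_dataset

# Placeholder repo id -- the actual dataset behind this preview is not named in the text.
ds = load_dataset("username/github-issues", split="train")

print(ds.features)   # should mirror the schema table above
print(ds.num_rows)

first = ds[0]        # each row is a dict keyed by the column names
print(first["number"], first["title"], first["state"], first["html_url"])
```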
1,577,590,611
5,515
Unify `load_from_cache_file` type and logic
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "The commit also includes the changes to the `DatasetDict` methods or am I missing something?", "Oh, indeed. Feel free to mark the PR as \"Ready for review\" then.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0...
2023-02-09T10:04:46
2023-02-14T15:38:13
2023-02-14T14:26:42
* Updating type annotations for #`load_from_cache_file` * Added logic for cache checking if needed * Updated documentation following the wording of `Dataset.map`
HallerPatrick
https://github.com/huggingface/datasets/pull/5515
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5515", "html_url": "https://github.com/huggingface/datasets/pull/5515", "diff_url": "https://github.com/huggingface/datasets/pull/5515.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5515.patch", "merged_at": "2023-02-14T14:26...
true
1,576,453,837
5,514
Improve inconsistency of `Dataset.map` interface for `load_from_cache_file`
closed
[ "Hi, thanks for noticing this! We can't just remove the cache control as this allows us to control where the arrow files generated by the ops are written (cached on disk if enabled or a temporary directory if disabled). The right way to address this inconsistency would be by having `load_from_cache_file=None` by de...
2023-02-08T16:40:44
2023-02-14T14:26:44
2023-02-14T14:26:44
### Feature request 1. Replace the `load_from_cache_file` default value to `True`. 2. Remove or alter checks from `is_caching_enabled` logic. ### Motivation I stumbled over an inconsistency in the `Dataset.map` interface. The documentation (and source) states for the parameter `load_from_cache_file`: ``` load_...
HallerPatrick
https://github.com/huggingface/datasets/issues/5514
null
false
1,576,300,803
5,513
Some functions use a param named `type` shouldn't that be avoided since it's a Python reserved name?
closed
[ "Hi! Let's not do this - renaming it would be a breaking change, and going through the deprecation cycle is only worth it if it improves user experience.", "Hi @mariosasko, ok it makes sense. Anyway, don't you think it's worth it at some point to start a deprecation cycle e.g. `fs` in `load_from_disk`? It doesn't...
2023-02-08T15:13:46
2023-07-24T16:02:18
2023-07-24T14:27:59
Hi @mariosasko, @lhoestq, or whoever reads this! :) After going through `ArrowDataset.set_format` I found out that the `type` param is actually named `type` which is a Python reserved name as you may already know, shouldn't that be renamed to `format_type` before the 3.0.0 is released? Just wanted to get your inp...
alvarobartt
https://github.com/huggingface/datasets/issues/5513
null
false
1,576,142,432
5,512
Speed up batched PyTorch DataLoader
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-02-08T13:38:59
2023-02-19T18:35:09
2023-02-19T18:27:29
I implemented `__getitems__` to speed up batched data loading in PyTorch close https://github.com/huggingface/datasets/issues/5505
lhoestq
https://github.com/huggingface/datasets/pull/5512
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5512", "html_url": "https://github.com/huggingface/datasets/pull/5512", "diff_url": "https://github.com/huggingface/datasets/pull/5512.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5512.patch", "merged_at": "2023-02-19T18:27...
true
1,575,851,768
5,511
Creating a dummy dataset from a bigger one
closed
[ "Update `datasets` or downgrade `huggingface-hub` ;)\r\n\r\nThe `huggingface-hub` lib did a breaking change a few months ago, and you're using an old version of `datasets` that does't support it", "Awesome thanks a lot! Everything works just fine with `datasets==2.9.0` :-) ", "Getting same error with latest ver...
2023-02-08T10:18:41
2023-12-28T18:21:01
2023-02-08T10:35:48
### Describe the bug I often want to create a dummy dataset from a bigger dataset for fast iteration when training. However, I'm having a hard time doing this especially when trying to upload the dataset to the Hub. ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset...
patrickvonplaten
https://github.com/huggingface/datasets/issues/5511
null
false
1,575,191,549
5,510
Milvus integration for search
open
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5510). All of your documentation changes will be reflected on that endpoint.", "To the maintainer, sorry about the repeated run requests for formatting. Missed the `make style` outlined in contributing guidelines. ", "Anythin...
2023-02-07T23:30:26
2023-02-24T16:45:09
null
Signed-off-by: Filip Haltmayer <[email protected]>
filip-halt
https://github.com/huggingface/datasets/pull/5510
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5510", "html_url": "https://github.com/huggingface/datasets/pull/5510", "diff_url": "https://github.com/huggingface/datasets/pull/5510.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5510.patch", "merged_at": null }
true
1,574,177,320
5,509
Add a static `__all__` to `__init__.py` for typecheckers
open
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5509). All of your documentation changes will be reflected on that endpoint.", "Hi! I've commented on the original issue to provide some context. Feel free to share your opinion there." ]
2023-02-07T11:42:40
2023-02-08T17:48:24
null
This adds a static `__all__` field to `__init__.py`, allowing typecheckers to know which symbols are accessible from `datasets` at runtime. In particular [Pyright](https://github.com/microsoft/pylance-release/issues/2328#issuecomment-1029381258) seems to rely on this. At this point I have added all (modulo oversight) t...
LoicGrobol
https://github.com/huggingface/datasets/pull/5509
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5509", "html_url": "https://github.com/huggingface/datasets/pull/5509", "diff_url": "https://github.com/huggingface/datasets/pull/5509.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5509.patch", "merged_at": null }
true
1,573,290,359
5,508
Saving a dataset after setting format to torch doesn't work, but only if filtering
closed
[ "Hey, I'm a research engineer working on language modelling wanting to contribute to open source. I was wondering if I could give it a shot?", "Hi! This issue was fixed in https://github.com/huggingface/datasets/pull/4972, so please install `datasets>=2.5.0` to avoid it." ]
2023-02-06T21:08:58
2023-02-09T14:55:26
2023-02-09T14:55:26
### Describe the bug Saving a dataset after setting format to torch doesn't work, but only if filtering ### Steps to reproduce the bug ``` a = Dataset.from_dict({"b": [1, 2]}) a.set_format('torch') a.save_to_disk("test_save") # saves successfully a.filter(None).save_to_disk("test_save_filter") # does not >> [.....
joebhakim
https://github.com/huggingface/datasets/issues/5508
null
false
1,572,667,036
5,507
Optimise behaviour in respect to indices mapping
open
[]
2023-02-06T14:25:55
2023-02-28T18:19:18
null
_Originally [posted](https://huggingface.slack.com/archives/C02V51Q3800/p1675443873878489?thread_ts=1675418893.373479&cid=C02V51Q3800) on Slack_ Considering all this, perhaps for Datasets 3.0, we can do the following: * [ ] have `continuous=True` by default in `.shard` (requested in the survey and makes more sense...
mariosasko
https://github.com/huggingface/datasets/issues/5507
null
false
1,571,838,641
5,506
IterableDataset and Dataset return different batch sizes when using Trainer with multiple GPUs
closed
[ "Hi ! `datasets` doesn't do batching - the PyTorch DataLoader does and is created by the `Trainer`. Do you pass other arguments to training_args with respect to data loading ?\r\n\r\nAlso we recently released `.to_iterable_dataset` that does pretty much what you implemented, but using contiguous shards to get a bet...
2023-02-06T03:26:03
2023-02-08T18:30:08
2023-02-08T18:30:07
### Describe the bug I am training a Roberta model using 2 GPUs and the `Trainer` API with a batch size of 256. Initially I used a standard `Dataset`, but had issues with slow data loading. After reading [this issue](https://github.com/huggingface/datasets/issues/2252), I swapped to loading my dataset as contiguous...
kheyer
https://github.com/huggingface/datasets/issues/5506
null
false
1,571,720,814
5,505
PyTorch BatchSampler still loads from Dataset one-by-one
closed
[ "This change seems to come from a few months ago in the PyTorch side. That's good news and it means we may not need to pass a batch_sampler as soon as we add `Dataset.__getitems__` to get the optimal speed :)\r\n\r\nThanks for reporting ! Would you like to open a PR to add `__getitems__` and remove this outdated do...
2023-02-06T01:14:55
2023-02-19T18:27:30
2023-02-19T18:27:30
### Describe the bug In [the docs here](https://huggingface.co/docs/datasets/use_with_pytorch#use-a-batchsampler), it mentions the issue of the Dataset being read one-by-one, then states that using a BatchSampler resolves the issue. I'm not sure if this is a mistake in the docs or the code, but it seems that the on...
davidgilbertson
https://github.com/huggingface/datasets/issues/5505
null
false
1,570,621,242
5,504
don't zero copy timestamps
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-02-03T23:39:04
2023-02-08T17:28:50
2023-02-08T14:33:17
Fixes https://github.com/huggingface/datasets/issues/5495 I'm not sure whether we prefer a test here or if timestamps are known to be unsupported (like booleans). The current test at least covers the bug
dwyatte
https://github.com/huggingface/datasets/pull/5504
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5504", "html_url": "https://github.com/huggingface/datasets/pull/5504", "diff_url": "https://github.com/huggingface/datasets/pull/5504.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5504.patch", "merged_at": "2023-02-08T14:33...
true
1,570,091,225
5,502
Added functionality: sort datasets by multiple keys
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Thanks! I've left some comments.\r\n> \r\n> We should also add some tests, mainly to make sure `reverse` behaves as expected. Let me know if you need help with that.\r\n\r\nThanks for the offer! I couldn't find any guidelines on ho...
2023-02-03T16:17:00
2023-02-21T14:46:49
2023-02-21T14:39:23
Added functionality implementation: sort datasets by multiple keys/columns as discussed in https://github.com/huggingface/datasets/issues/5425.
MichlF
https://github.com/huggingface/datasets/pull/5502
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5502", "html_url": "https://github.com/huggingface/datasets/pull/5502", "diff_url": "https://github.com/huggingface/datasets/pull/5502.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5502.patch", "merged_at": "2023-02-21T14:39...
true
1,569,644,159
5,501
Increase chunk size for speeding up file downloads
open
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5501). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma...
2023-02-03T10:50:10
2023-02-09T11:04:11
null
Original fix: https://github.com/huggingface/huggingface_hub/pull/1267 Not sure this function is actually still called though. I haven't done benches on this. Is there a dataset where files are hosted on the hub through cloudfront so we can have the same setup as in `hf_hub` ?
Narsil
https://github.com/huggingface/datasets/pull/5501
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5501", "html_url": "https://github.com/huggingface/datasets/pull/5501", "diff_url": "https://github.com/huggingface/datasets/pull/5501.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5501.patch", "merged_at": null }
true
1,569,257,240
5,500
WMT19 custom download checksum error
closed
[ "I update the `datatsets` version and it works." ]
2023-02-03T05:45:37
2023-02-03T05:52:56
2023-02-03T05:52:56
### Describe the bug I use the following scripts to download data from WMT19: ```python import datasets from datasets import inspect_dataset, load_dataset_builder from wmt19.wmt_utils import _TRAIN_SUBSETS,_DEV_SUBSETS ## this is a must due to: https://discuss.huggingface.co/t/load-dataset-hangs-with-local-fi...
Hannibal046
https://github.com/huggingface/datasets/issues/5500
null
false
1,568,937,026
5,499
`load_dataset` has ~4 seconds of overhead for cached data
open
[ "Hi ! To skip the verification step that checks if newer data exist, you can enable offline mode with `HF_DATASETS_OFFLINE=1`.\r\n\r\nAlthough I agree this step should be much faster for datasets hosted on the HF Hub - we could just compare the commit hash from the local data and the remote git repository. We're no...
2023-02-02T23:34:50
2023-02-07T19:35:11
null
### Feature request When loading a dataset that has been cached locally, the `load_dataset` function takes a lot longer than it should take to fetch the dataset from disk (or memory). This is particularly noticeable for smaller datasets. For example, wikitext-2, comparing `load_data` (once cached) and `load_from_disk...
davidgilbertson
https://github.com/huggingface/datasets/issues/5499
null
false
1,568,190,529
5,498
TypeError: 'bool' object is not iterable when filtering a datasets.arrow_dataset.Dataset
closed
[ "Hi! Instead of a single boolean, your filter function should return an iterable (of booleans) in the batched mode like so:\r\n```python\r\ntrain_dataset = train_dataset.filter(\r\n function=lambda batch: [image is not None for image in batch[\"image\"]], \r\n batched=True,\r\n batc...
2023-02-02T14:46:49
2023-10-08T06:12:47
2023-02-04T17:19:36
### Describe the bug Hi, Thanks for the amazing work on the library! **Describe the bug** I think I might have noticed a small bug in the filter method. Having loaded a dataset using `load_dataset`, when I try to filter out empty entries with `batched=True`, I get a TypeError. ### Steps to reproduce the ...
vmuel
https://github.com/huggingface/datasets/issues/5498
null
false
1,567,601,264
5,497
Improved error message for gated/private repos
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-02-02T08:56:15
2023-02-02T11:26:08
2023-02-02T11:17:15
Using `use_auth_token=True` is not needed anymore. If a user logged in, the token will be automatically retrieved. Also include a mention for gated repos See https://github.com/huggingface/huggingface_hub/pull/1064
osanseviero
https://github.com/huggingface/datasets/pull/5497
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5497", "html_url": "https://github.com/huggingface/datasets/pull/5497", "diff_url": "https://github.com/huggingface/datasets/pull/5497.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5497.patch", "merged_at": "2023-02-02T11:17...
true
1,567,301,765
5,496
Add a `reduce` method
closed
[ "Hi! Sure, feel free to open a PR, so we can see the API you have in mind.", "I would like to give it a go! #self-assign", "Closing as `Dataset.map` can be used instead (see https://github.com/huggingface/datasets/pull/5533#issuecomment-1440571658 and https://github.com/huggingface/datasets/pull/5533#issuecomme...
2023-02-02T04:30:22
2024-11-12T05:58:14
2023-07-21T14:24:32
### Feature request Right now the `Dataset` class implements `map()` and `filter()`, but leaves out the third functional idiom popular among Python users: `reduce`. ### Motivation A `reduce` method is often useful when calculating dataset statistics, for example, the occurrence of a particular n-gram or the average...
zhangir-azerbayev
https://github.com/huggingface/datasets/issues/5496
null
false
1,566,803,452
5,495
to_tf_dataset fails with datetime UTC columns even if not included in columns argument
closed
[ "Hi! This is indeed a bug in our zero-copy logic.\r\n\r\nTo fix it, instead of the line:\r\nhttps://github.com/huggingface/datasets/blob/7cfac43b980ab9e4a69c2328f085770996323005/src/datasets/features/features.py#L702\r\n\r\nwe should have:\r\n```python\r\nreturn pa.types.is_primitive(pa_type) and not (pa.types.is_b...
2023-02-01T20:47:33
2023-02-08T14:33:19
2023-02-08T14:33:19
### Describe the bug There appears to be some eager behavior in `to_tf_dataset` that runs against every column in a dataset even if they aren't included in the columns argument. This is problematic with datetime UTC columns due to them not working with zero copy. If I don't have UTC information in my datetime column...
dwyatte
https://github.com/huggingface/datasets/issues/5495
null
false
1,566,655,348
5,494
Update audio installation doc page
closed
[ "Totally agree, the docs should be in sync with our code.\r\n\r\nIndeed to avoid confusing users, I think we should have updated the docs at the same time as this PR:\r\n- #5167", "@albertvillanova yeah sure I should have, but I forgot back then, sorry for that 😶", "No, @polinaeterna, nothing to be sorry about...
2023-02-01T19:07:50
2023-03-02T16:08:17
2023-03-02T16:08:17
Our [installation documentation page](https://huggingface.co/docs/datasets/installation#audio) says that one can use Datasets for mp3 only with `torchaudio<0.12`. `torchaudio>0.12` is actually supported too but requires a specific version of ffmpeg which is not easily installed on all linux versions but there is a cust...
polinaeterna
https://github.com/huggingface/datasets/issues/5494
null
false
1,566,637,806
5,493
Remove unused `load_from_cache_file` arg from `Dataset.shard()` docstring
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5493). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n...
2023-02-01T18:57:48
2023-02-08T15:10:46
2023-02-08T15:03:50
null
polinaeterna
https://github.com/huggingface/datasets/pull/5493
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5493", "html_url": "https://github.com/huggingface/datasets/pull/5493", "diff_url": "https://github.com/huggingface/datasets/pull/5493.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5493.patch", "merged_at": "2023-02-08T15:03...
true
1,566,604,216
5,492
Push_to_hub in a pull request
closed
[ "Assigned to myself and will get to it in the next week, but if someone finds this issue annoying and wants to submit a PR before I do, just ping me here and I'll reassign :). ", "I would like to be assigned to this issue, @nateraw . #self-assign" ]
2023-02-01T18:32:14
2023-10-16T13:30:48
2023-10-16T13:30:48
Right now `ds.push_to_hub()` can push a dataset on `main` or on a new branch with `branch=`, but there is no way to open a pull request. Even passing `branch=refs/pr/x` doesn't seem to work: it tries to create a branch with that name cc @nateraw It should be possible to tweak the use of `huggingface_hub` in `pus...
lhoestq
https://github.com/huggingface/datasets/issues/5492
null
false
1,566,235,012
5,491
[MINOR] Typo
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-02-01T14:39:39
2023-02-02T07:42:28
2023-02-02T07:35:14
null
cakiki
https://github.com/huggingface/datasets/pull/5491
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5491", "html_url": "https://github.com/huggingface/datasets/pull/5491", "diff_url": "https://github.com/huggingface/datasets/pull/5491.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5491.patch", "merged_at": "2023-02-02T07:35...
true
1,565,842,327
5,490
Do not add index column by default when exporting to CSV
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-02-01T10:20:55
2023-02-09T09:29:08
2023-02-09T09:22:23
As pointed out by @merveenoyan, default behavior of `Dataset.to_csv` adds the index as an additional column without name. This PR changes the default behavior, so that now the index column is not written. To add the index column, now you need to pass `index=True` and also `index_label=<name of the index colum>` t...
albertvillanova
https://github.com/huggingface/datasets/pull/5490
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5490", "html_url": "https://github.com/huggingface/datasets/pull/5490", "diff_url": "https://github.com/huggingface/datasets/pull/5490.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5490.patch", "merged_at": "2023-02-09T09:22...
true
1,565,761,705
5,489
Pin dill lower version
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-02-01T09:33:42
2023-02-02T07:48:09
2023-02-02T07:40:43
Pin `dill` lower version compatible with `datasets`. Related to: - #5487 - #288 Note that the required `dill._dill` module was introduced in dill-2.8.0, however we have heuristically tested that datasets can only be installed with dill>=3.0.0 (otherwise pip hangs indefinitely while preparing metadata for multip...
albertvillanova
https://github.com/huggingface/datasets/pull/5489
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5489", "html_url": "https://github.com/huggingface/datasets/pull/5489", "diff_url": "https://github.com/huggingface/datasets/pull/5489.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5489.patch", "merged_at": "2023-02-02T07:40...
true
1,565,025,262
5,488
Error loading MP3 files from CommonVoice
closed
[ "Hi @kradonneoh, thanks for reporting.\r\n\r\nPlease note that to work with audio datasets (and specifically with MP3 files) we have detailed installation instructions in our docs: https://huggingface.co/docs/datasets/installation#audio\r\n- one of the requirements is torchaudio<0.12.0\r\n\r\nLet us know if the pro...
2023-01-31T21:25:33
2023-03-02T16:25:14
2023-03-02T16:25:13
### Describe the bug When loading a CommonVoice dataset with `datasets==2.9.0` and `torchaudio>=0.12.0`, I get an error reading the audio arrays: ```python --------------------------------------------------------------------------- LibsndfileError Traceback (most recent call last) ~/.l...
kradonneoh
https://github.com/huggingface/datasets/issues/5488
null
false
1,564,480,121
5,487
Incorrect filepath for dill module
closed
[ "Hi! The correct path is still `dill._dill.XXXX` in the latest release. What do you get when you run `python -c \"import dill; print(dill.__version__)\"` in your environment?", "`0.3.6` I feel like that's bad news, because it's probably not the issue.\r\n\r\nMy mistake, about the wrong path guess. I think I did...
2023-01-31T15:01:08
2023-02-24T16:18:36
2023-02-24T16:18:36
### Describe the bug I installed the `datasets` package and when I try to `import` it, I get the following error: ``` Traceback (most recent call last): File "/var/folders/jt/zw5g74ln6tqfdzsl8tx378j00000gn/T/ipykernel_3805/3458380017.py", line 1, in <module> import datasets File "/Users/avivbrokman/...
avivbrokman
https://github.com/huggingface/datasets/issues/5487
null
false
1,564,059,749
5,486
Adding `sep` to TextConfig
open
[ "Hi @omar-araboghli, thanks for your proposal.\r\n\r\nHave you tried to use \"csv\" loader instead of \"text\"? That already has a `sep` argument.", "Hi @albertvillanova, thanks for the quick response!\r\n\r\nIndeed, I have been trying to use `csv` instead of `text`. However I am still not able to define range of...
2023-01-31T10:39:53
2023-01-31T14:50:18
null
I have a local a `.txt` file that follows the `CONLL2003` format which I need to load using `load_script`. However, by using `sample_by='line'`, one can only split the dataset into lines without splitting each line into columns. Would it be reasonable to add a `sep` argument in combination with `sample_by='paragraph'` ...
omar-araboghli
https://github.com/huggingface/datasets/issues/5486
null
false
1,563,002,829
5,485
Add section in tutorial for IterableDataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-30T18:43:04
2023-02-01T18:15:38
2023-02-01T18:08:46
Introduces an `IterableDataset` and how to access it in the tutorial section. It also adds a brief next step section at the end to provide a path for users who want more explanation and a path for users who want something more practical and learn how to preprocess these dataset types. It'll complement the awesome new d...
stevhliu
https://github.com/huggingface/datasets/pull/5485
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5485", "html_url": "https://github.com/huggingface/datasets/pull/5485", "diff_url": "https://github.com/huggingface/datasets/pull/5485.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5485.patch", "merged_at": "2023-02-01T18:08...
true
1,562,877,070
5,484
Update docs for `nyu_depth_v2` dataset
closed
[ "I think I need to create another PR on https://huggingface.co/datasets/huggingface/documentation-images/tree/main/datasets for hosting the images there?", "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the update @awsaf49 !", "> Thanks a lot for the updates!\r\n> ...
2023-01-30T17:37:08
2023-09-29T06:43:11
2023-02-05T14:15:04
This PR will fix the issue mentioned in #5461. Here is brief overview, ## Bug: Discrepancy between depth map of `nyu_depth_v2` dataset [here](https://huggingface.co/docs/datasets/main/en/depth_estimation) and actual depth map. Depth values somehow got **discretized/clipped** resulting in depth maps that are diffe...
awsaf49
https://github.com/huggingface/datasets/pull/5484
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5484", "html_url": "https://github.com/huggingface/datasets/pull/5484", "diff_url": "https://github.com/huggingface/datasets/pull/5484.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5484.patch", "merged_at": "2023-02-05T14:15...
true
1,560,894,690
5,483
Unable to upload dataset
closed
[ "Seems to work now, perhaps it was something internal with our university's network." ]
2023-01-28T15:18:26
2023-01-29T08:09:49
2023-01-29T08:09:49
### Describe the bug Uploading a simple dataset ends with an exception ### Steps to reproduce the bug I created a new conda env with python 3.10, pip installed datasets and: ```python >>> from datasets import load_dataset, load_from_disk, Dataset >>> d = Dataset.from_dict({"text": ["hello"] * 2}) >>> d.pus...
yuvalkirstain
https://github.com/huggingface/datasets/issues/5483
null
false
1,560,853,137
5,482
Reload features from Parquet metadata
closed
[ "I'd be happy to have a look, if nobody else has started working on this yet @lhoestq. \r\n\r\nIt seems to me that for the `arrow` format features are currently attached as metadata [in `datasets.arrow_writer`](https://github.com/huggingface/datasets/blob/5f810b7011a8a4ab077a1847c024d2d9e267b065/src/datasets/arrow_...
2023-01-28T13:12:31
2023-02-12T15:57:02
2023-02-12T15:57:02
The idea would be to allow this : ```python ds.to_parquet("my_dataset/ds.parquet") reloaded = load_dataset("my_dataset") assert ds.features == reloaded.features ``` And it should also work with Image and Audio types (right now they're reloaded as a dict type) This can be implemented by storing and reading th...
lhoestq
https://github.com/huggingface/datasets/issues/5482
null
false
1,560,468,195
5,481
Load a cached dataset as iterable
open
[ "Can I work on this issue? I am pretty new to this.", "Hi ! Sure :) you can comment `#self-assign` to assign yourself to this issue.\r\n\r\nI can give you some pointers to get started:\r\n\r\n`load_dataset` works roughly this way:\r\n1. it instantiate a dataset builder using `load_dataset_builder()`\r\n2. the bui...
2023-01-27T21:43:51
2025-06-19T19:30:52
null
The idea would be to allow something like ```python ds = load_dataset("c4", "en", as_iterable=True) ``` To be used to train models. It would load an IterableDataset from the cached Arrow files. Cc @stas00 Edit : from the discussions we may load from cache when streaming=True
lhoestq
https://github.com/huggingface/datasets/issues/5481
null
false
1,560,364,866
5,480
Select columns of Dataset or DatasetDict
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-27T20:06:16
2023-02-13T11:10:13
2023-02-13T09:59:35
Close #5474 and #5468.
daskol
https://github.com/huggingface/datasets/pull/5480
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5480", "html_url": "https://github.com/huggingface/datasets/pull/5480", "diff_url": "https://github.com/huggingface/datasets/pull/5480.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5480.patch", "merged_at": "2023-02-13T09:59...
true
1,560,357,590
5,479
audiofolder works on local env, but creates empty dataset in a remote one, what dependencies could I be missing/outdated
closed
[]
2023-01-27T20:01:22
2023-01-29T05:23:14
2023-01-29T05:23:14
### Describe the bug I'm using a custom audio dataset (400+ audio files) in the correct format for audiofolder. Although loading the dataset with audiofolder works in one local setup, it doesn't in a remote one (it just creates an empty dataset). I have both ffmpeg and libndfile installed on both computers, what cou...
jcho19
https://github.com/huggingface/datasets/issues/5479
null
false
1,560,357,583
5,478
Tip for recomputing metadata
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-27T20:01:22
2023-01-30T19:22:21
2023-01-30T19:15:26
From this [feedback](https://discuss.huggingface.co/t/nonmatchingsplitssizeserror/30033) on the forum, thought I'd include a tip for recomputing the metadata numbers if it is your own dataset.
stevhliu
https://github.com/huggingface/datasets/pull/5478
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5478", "html_url": "https://github.com/huggingface/datasets/pull/5478", "diff_url": "https://github.com/huggingface/datasets/pull/5478.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5478.patch", "merged_at": "2023-01-30T19:15...
true
1,559,909,892
5,477
Unpin sqlalchemy once issue is fixed
closed
[ "@albertvillanova It looks like that issue has been fixed so I made a PR to unpin sqlalchemy! ", "The source issue:\r\n- https://github.com/pandas-dev/pandas/issues/40686\r\n\r\nhas been fixed:\r\n- https://github.com/pandas-dev/pandas/pull/48576\r\n\r\nThe fix was released yesterday (2023-04-03) only in `pandas-...
2023-01-27T15:01:55
2024-01-26T14:50:45
2024-01-26T14:50:45
Once the source issue is fixed: - pandas-dev/pandas#51015 we should revert the pin introduced in: - #5476
albertvillanova
https://github.com/huggingface/datasets/issues/5477
null
false
1,559,594,684
5,476
Pin sqlalchemy
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-27T11:26:38
2023-01-27T12:06:51
2023-01-27T11:57:48
since sqlalchemy update to 2.0.0 the CI started to fail: https://github.com/huggingface/datasets/actions/runs/4023742457/jobs/6914976514 the error comes from pandas: https://github.com/pandas-dev/pandas/issues/51015
lhoestq
https://github.com/huggingface/datasets/pull/5476
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5476", "html_url": "https://github.com/huggingface/datasets/pull/5476", "diff_url": "https://github.com/huggingface/datasets/pull/5476.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5476.patch", "merged_at": "2023-01-27T11:57...
true
1,559,030,149
5,475
Dataset scan time is much slower than using native arrow
closed
[ "Hi ! In your code you only iterate on the Arrow buffers - you don't actually load the data as python objects. For a fair comparison, you can modify your code using:\r\n```diff\r\n- for _ in range(0, len(table), bsz):\r\n- _ = {k:table[k][_ : _ + bsz] for k in cols}\r\n+ for _ in range(0, len(table)...
2023-01-27T01:32:25
2023-01-30T16:17:11
2023-01-30T16:17:11
### Describe the bug I'm basically running the same scanning experiment from the tutorials https://huggingface.co/course/chapter5/4?fw=pt except now I'm comparing to a native pyarrow version. I'm finding that the native pyarrow approach is much faster (2 orders of magnitude). Is there something I'm missing that exp...
jonny-cyberhaven
https://github.com/huggingface/datasets/issues/5475
null
false
1,558,827,155
5,474
Column project operation on `datasets.Dataset`
closed
[ "Hi ! This would be a nice addition indeed :) This sounds like a duplicate of https://github.com/huggingface/datasets/issues/5468\r\n\r\n> Not sure. Some of my PRs are still open and some do not have any discussions.\r\n\r\nSorry to hear that, feel free to ping me on those PRs" ]
2023-01-26T21:47:53
2023-02-13T09:59:37
2023-02-13T09:59:37
### Feature request There is no operation to select a subset of columns of original dataset. Expected API follows. ```python a = Dataset.from_dict({ 'int': [0, 1, 2] 'char': ['a', 'b', 'c'], 'none': [None] * 3, }) b = a.project('int', 'char') # usually, .select() print(a.column_names) # std...
daskol
https://github.com/huggingface/datasets/issues/5474
null
false
1,558,668,197
5,473
Set dev version
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-26T19:34:44
2023-01-26T19:47:34
2023-01-26T19:38:30
null
lhoestq
https://github.com/huggingface/datasets/pull/5473
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5473", "html_url": "https://github.com/huggingface/datasets/pull/5473", "diff_url": "https://github.com/huggingface/datasets/pull/5473.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5473.patch", "merged_at": "2023-01-26T19:38...
true
1,558,662,251
5,472
Release: 2.9.0
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-26T19:29:42
2023-01-26T19:40:44
2023-01-26T19:33:00
null
lhoestq
https://github.com/huggingface/datasets/pull/5472
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5472", "html_url": "https://github.com/huggingface/datasets/pull/5472", "diff_url": "https://github.com/huggingface/datasets/pull/5472.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5472.patch", "merged_at": "2023-01-26T19:33...
true
1,558,557,545
5,471
Add num_test_batches option
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "I thought this issue was resolved in my parallel `to_tf_dataset` PR! I changed the default `num_test_batches` in `_get_output_signature` to 20 and used a test batch size of 1 to maximize variance to detect shorter samples. I think it...
2023-01-26T18:09:40
2023-01-27T18:16:45
2023-01-27T18:08:36
`to_tf_dataset` calls can be very costly because of the number of test batches drawn during `_get_output_signature`. The test batches are draw in order to estimate the shapes when creating the tensorflow dataset. This is necessary when the shapes can be irregular, but not in cases when the tensor shapes are the same ac...
amyeroberts
https://github.com/huggingface/datasets/pull/5471
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5471", "html_url": "https://github.com/huggingface/datasets/pull/5471", "diff_url": "https://github.com/huggingface/datasets/pull/5471.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5471.patch", "merged_at": "2023-01-27T18:08...
true
1,558,542,611
5,470
Update dataset card creation
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "The CI failure is unrelated to your PR - feel free to merge :)", "Haha thanks, you read my mind :)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n##...
2023-01-26T17:57:51
2023-01-27T16:27:00
2023-01-27T16:20:10
Encourages users to create a dataset card on the Hub directly with the new metadata ui + import dataset card template instead of telling users to manually create and upload one.
stevhliu
https://github.com/huggingface/datasets/pull/5470
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5470", "html_url": "https://github.com/huggingface/datasets/pull/5470", "diff_url": "https://github.com/huggingface/datasets/pull/5470.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5470.patch", "merged_at": "2023-01-27T16:20...
true
1,558,346,906
5,469
Remove deprecated `shard_size` arg from `.push_to_hub()`
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-26T15:40:56
2023-01-26T17:37:51
2023-01-26T17:30:59
The docstrings say that it was supposed to be deprecated since version 2.4.0, can we remove it?
polinaeterna
https://github.com/huggingface/datasets/pull/5469
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5469", "html_url": "https://github.com/huggingface/datasets/pull/5469", "diff_url": "https://github.com/huggingface/datasets/pull/5469.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5469.patch", "merged_at": "2023-01-26T17:30...
true
1,558,066,625
5,468
Allow opposite of remove_columns on Dataset and DatasetDict
closed
[ "Hi! I agree it would be nice to have a method like that. Instead of `keep_columns`, we can name it `select_columns` to be more aligned with PyArrow's naming convention (`pa.Table.select`).", "Hi, I am a newbie to open source and would like to contribute. @mariosasko can I take up this issue ?", "Hey, I also wa...
2023-01-26T12:28:09
2023-02-13T09:59:38
2023-02-13T09:59:38
### Feature request In this blog post https://huggingface.co/blog/audio-datasets, I noticed the following code: ```python COLUMNS_TO_KEEP = ["text", "audio"] all_columns = gigaspeech["train"].column_names columns_to_remove = set(all_columns) - set(COLUMNS_TO_KEEP) gigaspeech = gigaspeech.remove_columns(column...
hollance
https://github.com/huggingface/datasets/issues/5468
null
false
1,557,898,273
5,467
Fix conda command in readme
closed
[ "ah didn't read well - it's all good", "or maybe it isn't ? `-c huggingface -c conda-forge` installs from HF or from conda-forge ?", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | ...
2023-01-26T10:03:01
2023-09-24T10:06:59
2023-01-26T18:29:37
The [conda forge channel](https://anaconda.org/conda-forge/datasets) is lagging behind (as of right now, only 2.7.1 is available), we should recommend using the [Hugging face channel](https://anaconda.org/HuggingFace/datasets) that we are maintaining ``` conda install -c huggingface datasets ```
lhoestq
https://github.com/huggingface/datasets/pull/5467
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5467", "html_url": "https://github.com/huggingface/datasets/pull/5467", "diff_url": "https://github.com/huggingface/datasets/pull/5467.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5467.patch", "merged_at": null }
true
1,557,584,845
5,466
remove pathlib.Path with URIs
closed
[ "Thanks !\r\n`os.path.join` will use a backslash `\\` on windows which will also fail. You can use this instead in `load_from_disk`:\r\n```python\r\nfrom .filesystems import is_remote_filesystem\r\n\r\nis_local = not is_remote_filesystem(fs)\r\npath_join = os.path.join if is_local else posixpath.join\r\n```", "Th...
2023-01-26T03:25:45
2023-01-26T17:08:57
2023-01-26T16:59:11
Pathlib will convert "//" to "/" which causes retry errors when downloading from cloud storage
jonny-cyberhaven
https://github.com/huggingface/datasets/pull/5466
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5466", "html_url": "https://github.com/huggingface/datasets/pull/5466", "diff_url": "https://github.com/huggingface/datasets/pull/5466.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5466.patch", "merged_at": "2023-01-26T16:59...
true
1,557,510,618
5,465
audiofolder creates empty dataset even though the dataset passed in follows the correct structure
closed
[]
2023-01-26T01:45:45
2023-01-26T08:48:45
2023-01-26T08:48:45
### Describe the bug The structure of my dataset folder called "my_dataset" is : data metadata.csv The data folder consists of all mp3 files and metadata.csv consist of file locations like 'data/...mp3 and transcriptions. There's 400+ mp3 files and corresponding transcriptions for my dataset. When I run the follo...
jcho19
https://github.com/huggingface/datasets/issues/5465
null
false
1,557,462,104
5,464
NonMatchingChecksumError for hendrycks_test
closed
[ "Thanks for reporting, @sarahwie.\r\n\r\nPlease note this issue was already fixed in `datasets` 2.6.0 version:\r\n- #5040\r\n\r\nIf you update your `datasets` version, you will be able to load the dataset:\r\n```\r\npip install -U datasets\r\n```", "Oops, missed that I needed to upgrade. Thanks!" ]
2023-01-26T00:43:23
2023-01-27T05:44:31
2023-01-26T07:41:58
### Describe the bug The checksum of the file has likely changed on the remote host. ### Steps to reproduce the bug `dataset = nlp.load_dataset("hendrycks_test", "anatomy")` ### Expected behavior no error thrown ### Environment info - `datasets` version: 2.2.1 - Platform: macOS-13.1-arm64-arm-64bit - Pyt...
sarahwie
https://github.com/huggingface/datasets/issues/5464
null
false
1,557,021,041
5,463
Imagefolder docs: mention support of CSV and ZIP
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-25T17:24:01
2023-01-25T18:33:35
2023-01-25T18:26:15
null
lhoestq
https://github.com/huggingface/datasets/pull/5463
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5463", "html_url": "https://github.com/huggingface/datasets/pull/5463", "diff_url": "https://github.com/huggingface/datasets/pull/5463.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5463.patch", "merged_at": "2023-01-25T18:26...
true
1,556,572,144
5,462
Concatenate on axis=1 with misaligned blocks
closed
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-01-25T12:33:22
2023-01-26T09:37:00
2023-01-26T09:27:19
Allow to concatenate on axis 1 two tables made of misaligned blocks. For example if the first table has 2 row blocks of 3 rows each, and the second table has 3 row blocks or 2 rows each. To do that, I slice the row blocks to re-align the blocks. Fix https://github.com/huggingface/datasets/issues/5413
lhoestq
https://github.com/huggingface/datasets/pull/5462
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5462", "html_url": "https://github.com/huggingface/datasets/pull/5462", "diff_url": "https://github.com/huggingface/datasets/pull/5462.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5462.patch", "merged_at": "2023-01-26T09:27...
true
1,555,532,719
5,461
Discrepancy in `nyu_depth_v2` dataset
open
[ "Ccing @dwofk (the author of `fast-depth`). \r\n\r\nThanks, @awsaf49 for reporting this. I believe this is because the NYU Depth V2 shipped from `fast-depth` is already preprocessed. \r\n\r\nIf you think it might be better to have the NYU Depth V2 dataset from BTS [here](https://huggingface.co/datasets/sayakpaul/ny...
2023-01-24T19:15:46
2023-02-06T20:52:00
null
### Describe the bug I think there is a discrepancy between depth map of `nyu_depth_v2` dataset [here](https://huggingface.co/docs/datasets/main/en/depth_estimation) and actual depth map. Depth values somehow got **discretized/clipped** resulting in depth maps that are different from actual ones. Here is a side-by-sid...
awsaf49
https://github.com/huggingface/datasets/issues/5461
null
false
1,555,387,532
5,460
Document that removing all the columns returns an empty document and the num_row is lost
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-24T17:33:38
2023-01-25T16:11:10
2023-01-25T16:04:03
null
thomasw21
https://github.com/huggingface/datasets/pull/5460
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5460", "html_url": "https://github.com/huggingface/datasets/pull/5460", "diff_url": "https://github.com/huggingface/datasets/pull/5460.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5460.patch", "merged_at": "2023-01-25T16:04...
true
1,555,367,504
5,459
Disable aiohttp requoting of redirection URL
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "Comment by @lhoestq:\r\n> Do you think we need this in `datasets` if it's fixed on the moon landing side ? In the aiohttp doc they consider those symbols as \"non-safe\" ", "The lib `requests` does not perform that requote on redir...
2023-01-24T17:18:59
2024-09-01T18:08:31
2023-01-31T08:37:54
The library `aiohttp` performs a requoting of redirection URLs that unquotes the single quotation mark character: `%27` => `'` This is a problem for our Hugging Face Hub, which requires exact URL from location header. Specifically, in the query component of the URL (`https://netloc/path?query`), the value for `re...
albertvillanova
https://github.com/huggingface/datasets/pull/5459
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5459", "html_url": "https://github.com/huggingface/datasets/pull/5459", "diff_url": "https://github.com/huggingface/datasets/pull/5459.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5459.patch", "merged_at": "2023-01-31T08:37...
true
1,555,054,737
5,458
slice split while streaming
closed
[ "Hi! Yes, that's correct. When `streaming` is `True`, only split names can be specified as `split`, and for slicing, you have to use `.skip`/`.take` instead.\r\n\r\nE.g. \r\n`load_dataset(\"lhoestq/demo1\",revision=None, streaming=True, split=\"train[:3]\")`\r\n\r\nrewritten with `.skip`/`.take`:\r\n`load_dataset(\...
2023-01-24T14:08:17
2023-01-24T15:11:47
2023-01-24T15:11:47
### Describe the bug When using the `load_dataset` function with streaming set to True, slicing splits is apparently not supported. Did I miss this in the documentation? ### Steps to reproduce the bug `load_dataset("lhoestq/demo1",revision=None, streaming=True, split="train[:3]")` causes ValueError: Bad split:...
SvenDS9
https://github.com/huggingface/datasets/issues/5458
null
false
1,554,171,264
5,457
prebuilt dataset relies on `downloads/extracted`
open
[ "Hi! \r\n\r\nThis issue is due to our audio/image datasets not being self-contained. This allows us to save disk space (files are written only once) but also leads to the issues like this one. We plan to make all our datasets self-contained in Datasets 3.0.\r\n\r\nIn the meantime, you can run the following map to e...
2023-01-24T02:09:32
2024-11-18T07:43:51
null
### Describe the bug I pre-built the dataset: ``` python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing ``` and it can be used just fine. now I wipe out `downloads/extracted` and it no longer works. ``` rm -r ~/.cache/huggingface...
stas00
https://github.com/huggingface/datasets/issues/5457
null
false
1,553,905,148
5,456
feat: tqdm for `to_parquet`
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-23T22:05:38
2023-01-24T11:26:47
2023-01-24T11:17:12
As described in #5418 I noticed also that the `to_json` function supports multi-workers whereas `to_parquet`, is that not possible/not needed with Parquet or something that hasn't been implemented yet?
zanussbaum
https://github.com/huggingface/datasets/pull/5456
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5456", "html_url": "https://github.com/huggingface/datasets/pull/5456", "diff_url": "https://github.com/huggingface/datasets/pull/5456.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5456.patch", "merged_at": "2023-01-24T11:17...
true
1,553,040,080
5,455
Single TQDM bar in multi-proc map
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-23T12:49:40
2023-02-13T20:23:34
2023-02-13T20:16:38
Use the "shard generator approach with periodic progress updates" (used in `save_to_disk` and multi-proc `load_dataset`) in `Dataset.map` to enable having a single TQDM progress bar in the multi-proc mode. Closes https://github.com/huggingface/datasets/issues/771, closes https://github.com/huggingface/datasets/issue...
mariosasko
https://github.com/huggingface/datasets/pull/5455
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5455", "html_url": "https://github.com/huggingface/datasets/pull/5455", "diff_url": "https://github.com/huggingface/datasets/pull/5455.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5455.patch", "merged_at": "2023-02-13T20:16...
true
1,552,890,419
5,454
Save and resume the state of a DataLoader
open
[ "Something that'd be nice to have is \"manual update of state\". One of the learning from training LLMs is the ability to skip some batches whenever we notice huge spike might be handy.", "Your outline spec is very sound and clear, @lhoestq - thank you!\r\n\r\n@thomasw21, indeed that would be a wonderful extra fe...
2023-01-23T10:58:54
2024-11-27T01:19:21
null
It would be nice when using `datasets` with a PyTorch DataLoader to be able to resume a training from a DataLoader state (e.g. to resume a training that crashed) What I have in mind (but lmk if you have other ideas or comments): For map-style datasets, this requires to have a PyTorch Sampler state that can be sav...
lhoestq
https://github.com/huggingface/datasets/issues/5454
null
false
1,552,727,425
5,453
Fix base directory while extracting insecure TAR files
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-23T08:57:40
2023-01-24T01:34:20
2023-01-23T10:10:42
This PR fixes the extraction of insecure TAR files by changing the base path against which TAR members are compared: - from: "." - to: `output_path` This PR also adds tests for extracting insecure TAR files. Related to: - #5441 - #5452 @stas00 please note this PR addresses just one of the issues you pointe...
albertvillanova
https://github.com/huggingface/datasets/pull/5453
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5453", "html_url": "https://github.com/huggingface/datasets/pull/5453", "diff_url": "https://github.com/huggingface/datasets/pull/5453.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5453.patch", "merged_at": "2023-01-23T10:10...
true
1,552,655,939
5,452
Swap log messages for symbolic/hard links in tar extractor
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-23T07:53:38
2023-01-23T09:40:55
2023-01-23T08:31:17
The log messages do not match their if-condition. This PR swaps them. Found while investigating: - #5441 CC: @lhoestq
albertvillanova
https://github.com/huggingface/datasets/pull/5452
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5452", "html_url": "https://github.com/huggingface/datasets/pull/5452", "diff_url": "https://github.com/huggingface/datasets/pull/5452.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5452.patch", "merged_at": "2023-01-23T08:31...
true
1,552,336,300
5,451
ImageFolder BadZipFile: Bad offset for central directory
closed
[ "Hi ! Could you share the full stack trace ? Which dataset did you try to load ?\r\n\r\nit may be related to https://github.com/huggingface/datasets/pull/5640", "The `BadZipFile` error means the ZIP file is corrupted, so I'm closing this issue as it's not directly related to `datasets`.", "For others that find ...
2023-01-22T23:50:12
2023-05-23T10:35:48
2023-02-10T16:31:36
### Describe the bug I'm getting the following exception: ``` lib/python3.10/zipfile.py:1353 in _RealGetContents │ │ │ │ 1350 │ │ # self.start_dir: Position of start of central directory ...
hmartiro
https://github.com/huggingface/datasets/issues/5451
null
false
1,551,109,365
5,450
to_tf_dataset with a TF collator causes bizarrely persistent slowdown
closed
[ "wtf", "Couldn't find what's causing this, this will need more investigation", "A possible hint: The function it seems to be spending a lot of time in (when iterating over the original dataset) is `_get_mp` in the PIL JPEG decoder: \r\n![image](https://user-images.githubusercontent.com/12866554/214057267-c889f0...
2023-01-20T16:08:37
2023-02-13T14:13:34
2023-02-13T14:13:34
### Describe the bug This will make more sense if you take a look at [a Colab notebook that reproduces this issue.](https://colab.research.google.com/drive/1rxyeciQFWJTI0WrZ5aojp4Ls1ut18fNH?usp=sharing) Briefly, there are several datasets that, when you iterate over them with `to_tf_dataset` **and** a data colla...
Rocketknight1
https://github.com/huggingface/datasets/issues/5450
null
false
1,550,801,453
5,449
Support fsspec 2023.1.0 in CI
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-20T12:53:17
2023-01-20T13:32:50
2023-01-20T13:26:03
Support fsspec 2023.1.0 in CI. In the 2023.1.0 fsspec release, they replaced the type of `fsspec.registry`: - from `ReadOnlyRegistry`, with an attribute called `target` - to `MappingProxyType`, without that attribute Consequently, we need to change our `mock_fsspec` fixtures, that were using the `target` attrib...
albertvillanova
https://github.com/huggingface/datasets/pull/5449
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5449", "html_url": "https://github.com/huggingface/datasets/pull/5449", "diff_url": "https://github.com/huggingface/datasets/pull/5449.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5449.patch", "merged_at": "2023-01-20T13:26...
true
1,550,618,514
5,448
Support fsspec 2023.1.0 in CI
closed
[]
2023-01-20T10:26:31
2023-01-20T13:26:05
2023-01-20T13:26:05
Once we find out the root cause of: - #5445 we should revert the temporary pin on fsspec introduced by: - #5447
albertvillanova
https://github.com/huggingface/datasets/issues/5448
null
false
1,550,599,193
5,447
Fix CI by temporarily pinning fsspec < 2023.1.0
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-20T10:11:02
2023-01-20T10:38:13
2023-01-20T10:28:43
Temporarily pin fsspec < 2023.1.0 Fix #5445.
albertvillanova
https://github.com/huggingface/datasets/pull/5447
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5447", "html_url": "https://github.com/huggingface/datasets/pull/5447", "diff_url": "https://github.com/huggingface/datasets/pull/5447.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5447.patch", "merged_at": "2023-01-20T10:28...
true
1,550,591,588
5,446
test v0.12.0.rc0
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "@Wauplin I was testing it in a dedicated branch without opening a PR: https://github.com/huggingface/datasets/commits/test-hfh-0.12.0rc0", "Oops, sorry @albertvillanova. I thought for next time I'll start the CIs before pinging eve...
2023-01-20T10:05:19
2023-01-20T10:43:22
2023-01-20T10:13:48
DO NOT MERGE. Only to test the CI. cc @lhoestq @albertvillanova
Wauplin
https://github.com/huggingface/datasets/pull/5446
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5446", "html_url": "https://github.com/huggingface/datasets/pull/5446", "diff_url": "https://github.com/huggingface/datasets/pull/5446.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5446.patch", "merged_at": null }
true
1,550,588,703
5,445
CI tests are broken: AttributeError: 'mappingproxy' object has no attribute 'target'
closed
[]
2023-01-20T10:03:10
2023-01-20T10:28:44
2023-01-20T10:28:44
CI tests are broken, raising `AttributeError: 'mappingproxy' object has no attribute 'target'`. See: https://github.com/huggingface/datasets/actions/runs/3966497597/jobs/6797384185 ``` ... ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://top_level-date=2019-10-0[1-4]/*-expected_path...
albertvillanova
https://github.com/huggingface/datasets/issues/5445
null
false
1,550,185,071
5,444
info messages logged as warnings
closed
[ "Looks like a duplicate of https://github.com/huggingface/datasets/issues/1948. \r\n\r\nI also think these should be logged as INFO messages, but let's see what @lhoestq thinks.", "It can be considered unexpected to see a `map` function return instantaneously. The warning is here to explain this case by mentionin...
2023-01-20T01:19:18
2023-07-12T17:19:31
2023-07-12T17:19:31
### Describe the bug Code in `datasets` is using `logger.warning` when it should be using `logger.info`. Some of these are probably a matter of opinion, but I think anything starting with `logger.warning(f"Loading chached` clearly falls into the info category. Definitions from the Python docs for reference: * I...
davidgilbertson
https://github.com/huggingface/datasets/issues/5444
null
false
1,550,178,914
5,443
Update share tutorial
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-20T01:09:14
2023-01-20T15:44:45
2023-01-20T15:37:30
Based on feedback from discussion #5423, this PR updates the sharing tutorial with a mention of writing your own dataset loading script to support more advanced dataset creation options like multiple configs. I'll open a separate PR to update the *Create a Dataset card* with the new Hub metadata UI update 😄
stevhliu
https://github.com/huggingface/datasets/pull/5443
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5443", "html_url": "https://github.com/huggingface/datasets/pull/5443", "diff_url": "https://github.com/huggingface/datasets/pull/5443.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5443.patch", "merged_at": "2023-01-20T15:37...
true
1,550,084,450
5,442
OneDrive Integrations with HF Datasets
closed
[ "Hi! \r\n\r\nWe use [`fsspec`](https://github.com/fsspec/filesystem_spec) to integrate with storage providers. You can find more info (and the usage examples) in [our docs](https://huggingface.co/docs/datasets/v2.8.0/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage).\r\n\r\n[`gdrivefs`](https://githu...
2023-01-19T23:12:08
2023-02-24T16:17:51
2023-02-24T16:17:51
### Feature request First of all, I would like to thank the whole community who developed the datasets storage and made it freely available. How can we integrate our OneDrive account, or any other possible cloud storage (like Google Drive, ...), with the **HF** datasets section? For example, if I have **50GB** on my **Onedrive*...
Mohammed20201991
https://github.com/huggingface/datasets/issues/5442
null
false
1,548,417,594
5,441
resolving a weird tar extract issue
open
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-01-19T02:17:21
2023-01-20T16:49:22
null
ok, every so often, I have been getting a strange failure on dataset install: ``` $ python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing No config specified, defaulting to: general-pmd-synthetic-testing/100.unique Downloading and prep...
stas00
https://github.com/huggingface/datasets/pull/5441
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5441", "html_url": "https://github.com/huggingface/datasets/pull/5441", "diff_url": "https://github.com/huggingface/datasets/pull/5441.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5441.patch", "merged_at": null }
true
1,538,361,143
5,440
Fix documentation about batch samplers
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-18T17:04:27
2023-01-18T17:57:29
2023-01-18T17:50:04
null
thomasw21
https://github.com/huggingface/datasets/pull/5440
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5440", "html_url": "https://github.com/huggingface/datasets/pull/5440", "diff_url": "https://github.com/huggingface/datasets/pull/5440.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5440.patch", "merged_at": "2023-01-18T17:50...
true
1,537,973,564
5,439
[dataset request] Add Common Voice 12.0
closed
[ "@polinaeterna any tentative date on when the Common Voice 12.0 dataset will be added ?", "This dataset is now hosted on the Hub here: https://huggingface.co/datasets/mozilla-foundation/common_voice_12_0" ]
2023-01-18T13:07:05
2023-07-21T14:26:10
2023-07-21T14:26:09
### Feature request Please add the Common Voice 12.0 dataset. Apart from English, a significant amount of audio data has been added to the other minor-language datasets. ### Motivation The dataset link: https://commonvoice.mozilla.org/en/datasets
MohammedRakib
https://github.com/huggingface/datasets/issues/5439
null
false
1,537,489,730
5,438
Update actions/checkout in CD Conda release
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-18T06:53:15
2023-01-18T13:49:51
2023-01-18T13:42:49
This PR updates the "checkout" GitHub Action to its latest version, as previous ones are deprecated: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/
albertvillanova
https://github.com/huggingface/datasets/pull/5438
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5438", "html_url": "https://github.com/huggingface/datasets/pull/5438", "diff_url": "https://github.com/huggingface/datasets/pull/5438.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5438.patch", "merged_at": "2023-01-18T13:42...
true
1,536,837,144
5,437
Can't load png dataset with 4 channel (RGBA)
closed
[ "Hi! Can you please share the directory structure of your image folder and the `load_dataset` call? We decode images with Pillow, and Pillow supports RGBA PNGs, so this shouldn't be a problem.\r\n\r\n", "> Hi! Can you please share the directory structure of your image folder and the `load_dataset` call? We decode...
2023-01-17T18:22:27
2023-01-18T20:20:15
2023-01-18T20:20:15
I am trying to create a dataset containing about 9000 PNG images, 64x64 in size, all of them 4-channel (RGBA). When using load_dataset(), a dataset is created from only 2 images. I cannot understand what exactly is interfering.![Screenshot_20230117_212213.jpg](https://user-images.githubusercontent.com/41611046...
WiNE-iNEFF
https://github.com/huggingface/datasets/issues/5437
null
false
1,536,633,173
5,436
Revert container image pin in CI benchmarks
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-17T15:59:50
2023-01-18T09:05:49
2023-01-18T06:29:06
Closes #5433, reverts #5432, and also: * Uses [ghcr.io container images](https://cml.dev/doc/self-hosted-runners/#docker-images) for extra speed * Updates `actions/checkout` to `v3` (note that `v2` is [deprecated](https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead...
0x2b3bfa0
https://github.com/huggingface/datasets/pull/5436
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5436", "html_url": "https://github.com/huggingface/datasets/pull/5436", "diff_url": "https://github.com/huggingface/datasets/pull/5436.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5436.patch", "merged_at": "2023-01-18T06:29...
true
1,536,099,300
5,435
Wrong statement in "Load a Dataset in Streaming mode" leads to data leakage
closed
[ "Just for your information, Tensorflow confirmed this issue [here.](https://github.com/tensorflow/tensorflow/issues/59279)", "Thanks for reporting, @HaoyuYang59.\r\n\r\nPlease note that these are different \"dataset\" objects: our docs refer to Hugging Face `datasets.Dataset` and not to TensorFlow `tf.data.Datase...
2023-01-17T10:04:16
2023-01-19T09:56:03
2023-01-19T09:56:03
### Describe the bug In the [Split your dataset with take and skip](https://huggingface.co/docs/datasets/v1.10.2/dataset_streaming.html#split-your-dataset-with-take-and-skip), it states: > Using take (or skip) prevents future calls to shuffle from shuffling the dataset shards order, otherwise the taken examples cou...
DanielYang59
https://github.com/huggingface/datasets/issues/5435
null
false
1,536,090,042
5,434
sample_dataset module not found
closed
[ "Hi! Can you describe what the actual error is?", "working on the setfit example script\r\n\r\n from setfit import SetFitModel, SetFitTrainer, sample_dataset\r\n\r\nImportError: cannot import name 'sample_dataset' from 'setfit' (C:\\Python\\Python38\\lib\\site-packages\\setfit\\__init__.py)\r\n\r\n apart from t...
2023-01-17T09:57:54
2023-01-19T13:52:12
2023-01-19T07:55:11
null
nickums
https://github.com/huggingface/datasets/issues/5434
null
false
1,536,017,901
5,433
Support latest Docker image in CI benchmarks
closed
[ "Sorry, it was us:[^1] https://github.com/iterative/cml/pull/1317 & https://github.com/iterative/cml/issues/1319#issuecomment-1385599559; should be fixed with [v0.18.17](https://github.com/iterative/cml/releases/tag/v0.18.17).\r\n\r\n[^1]: More or less, see https://github.com/yargs/yargs/issues/873.", "Opened htt...
2023-01-17T09:06:08
2023-01-18T06:29:08
2023-01-18T06:29:08
Once we find out the root cause of: - #5431 we should revert the temporary pin on the Docker image version introduced by: - #5432
albertvillanova
https://github.com/huggingface/datasets/issues/5433
null
false
1,535,893,019
5,432
Fix CI benchmarks by temporarily pinning Docker image version
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-17T07:15:31
2023-01-17T08:58:22
2023-01-17T08:51:17
This PR fixes CI benchmarks by temporarily pinning the Docker image version instead of using the "latest" tag. It also replaces the deprecated `cml-send-comment` command with `cml comment create`. Fix #5431.
albertvillanova
https://github.com/huggingface/datasets/pull/5432
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5432", "html_url": "https://github.com/huggingface/datasets/pull/5432", "diff_url": "https://github.com/huggingface/datasets/pull/5432.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5432.patch", "merged_at": "2023-01-17T08:51...
true
1,535,862,621
5,431
CI benchmarks are broken: Unknown arguments: runnerPath, path
closed
[]
2023-01-17T06:49:57
2023-01-18T06:33:24
2023-01-17T08:51:18
Our CI benchmarks are broken, raising `Unknown arguments` error: https://github.com/huggingface/datasets/actions/runs/3932397079/jobs/6724905161 ``` Unknown arguments: runnerPath, path ``` Stack trace: ``` 100%|██████████| 500/500 [00:01<00:00, 338.98ba/s] Updating lock file 'dvc.lock' To track the changes ...
albertvillanova
https://github.com/huggingface/datasets/issues/5431
null
false
1,535,856,503
5,430
Support Apache Beam >= 2.44.0
closed
[ "Some of the shard files now have 0 number of rows.\r\n\r\nWe have opened an issue in the Apache Beam repo:\r\n- https://github.com/apache/beam/issues/25041" ]
2023-01-17T06:42:12
2024-02-06T19:24:21
2024-02-06T19:24:21
Once we find out the root cause of: - #5426 we should revert the temporary pin on apache-beam introduced by: - #5429
albertvillanova
https://github.com/huggingface/datasets/issues/5430
null
false
1,535,192,687
5,429
Fix CI by temporarily pinning apache-beam < 2.44.0
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2023-01-16T16:20:09
2023-01-16T16:51:42
2023-01-16T16:49:03
Temporarily pin apache-beam < 2.44.0 Fix #5426.
albertvillanova
https://github.com/huggingface/datasets/pull/5429
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5429", "html_url": "https://github.com/huggingface/datasets/pull/5429", "diff_url": "https://github.com/huggingface/datasets/pull/5429.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5429.patch", "merged_at": "2023-01-16T16:49...
true
1,535,166,139
5,428
Load/Save FAISS index using fsspec
closed
[ "Hi! Sure, feel free to submit a PR. Maybe if we want to be consistent with the existing API, it would be cleaner to directly add support for `fsspec` paths in `Dataset.load_faiss_index`/`Dataset.save_faiss_index` in the same manner as it was done in `Dataset.load_from_disk`/`Dataset.save_to_disk`.", "That's a gr...
2023-01-16T16:08:12
2023-03-27T15:18:22
2023-03-27T15:18:22
### Feature request From what I understand `faiss` already support this [link](https://github.com/facebookresearch/faiss/wiki/Index-IO,-cloning-and-hyper-parameter-tuning#generic-io-support) I would like to use a stream as input to `Dataset.load_faiss_index` and `Dataset.save_faiss_index`. ### Motivation In...
Dref360
https://github.com/huggingface/datasets/issues/5428
null
false
1,535,162,889
5,427
Unable to download dataset id_clickbait
closed
[ "Thanks for reporting, @ilos-vigil.\r\n\r\nWe have transferred this issue to the corresponding dataset on the Hugging Face Hub: https://huggingface.co/datasets/id_clickbait/discussions/1 " ]
2023-01-16T16:05:36
2023-01-18T09:51:28
2023-01-18T09:25:19
### Describe the bug I tried to download the dataset `id_clickbait`, but received this error message. ``` FileNotFoundError: Couldn't find file at https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/k42j7x2kpn-1.zip ``` When I open the link in a browser, I get this XML data. ```xml <?xml versi...
ilos-vigil
https://github.com/huggingface/datasets/issues/5427
null
false
1,535,158,555
5,426
CI tests are broken: SchemaInferenceError
closed
[]
2023-01-16T16:02:07
2023-06-02T06:40:32
2023-01-16T16:49:04
CI test (unit, ubuntu-latest, deps-minimum) is broken, raising a `SchemaInferenceError`: see https://github.com/huggingface/datasets/actions/runs/3930901593/jobs/6721492004 ``` FAILED tests/test_beam.py::BeamBuilderTest::test_download_and_prepare_sharded - datasets.arrow_writer.SchemaInferenceError: Please pass `feat...
albertvillanova
https://github.com/huggingface/datasets/issues/5426
null
false
1,534,581,850
5,425
Sort on multiple keys with datasets.Dataset.sort()
closed
[ "Hi! \r\n\r\n`Dataset.sort` calls `df.sort_values` internally, and `df.sort_values` brings all the \"sort\" columns in memory, so sorting on multiple keys could be very expensive. This makes me think that maybe we can replace `df.sort_values` with `pyarrow.compute.sort_indices` - the latter can also sort on multipl...
2023-01-16T09:22:26
2023-02-24T16:15:11
2023-02-24T16:15:11
### Feature request From discussion on forum: https://discuss.huggingface.co/t/datasets-dataset-sort-does-not-preserve-ordering/29065/1 `sort()` does not preserve ordering, and it does not support sorting on multiple columns, nor a key function. The suggested solution: > ... having something similar to panda...
rocco-fortuna
https://github.com/huggingface/datasets/issues/5425
null
false
1,534,394,756
5,424
When applying `ReadInstruction` to custom load it's not DatasetDict but list of Dataset?
closed
[ "Hi! You can get a `DatasetDict` if you pass a dictionary with read instructions as follows:\r\n```python\r\ninstructions = [\r\n ReadInstruction(split_name=\"train\", from_=0, to=10, unit='%', rounding='closest'),\r\n ReadInstruction(split_name=\"dev\", from_=0, to=10, unit='%', rounding='closest'),\r\n R...
2023-01-16T06:54:28
2023-02-24T16:19:00
2023-02-24T16:19:00
### Describe the bug I am loading datasets from custom `tsv` files stored locally and applying split instructions for each split. The ReadInstruction is applied correctly, but I was expecting the result to be a `DatasetDict`; instead it is a list of `Dataset` objects. ### Steps to reproduce the bug Steps to reproduc...
macabdul9
https://github.com/huggingface/datasets/issues/5424
null
false
1,533,385,239
5,422
Datasets load error for saved github issues
open
[ "I can confirm that the error exists!\r\nI'm trying to read 3 parquet files locally:\r\n```python\r\nfrom datasets import load_dataset, Features, Value, ClassLabel\r\n\r\nreview_dataset = load_dataset(\r\n \"parquet\",\r\n data_files={\r\n \"train\": os.path.join(sentiment_analysis_data_path, \"train.p...
2023-01-14T17:29:38
2023-09-14T11:39:57
null
### Describe the bug Loading a previously downloaded & saved dataset as described in the HuggingFace course: issues_dataset = load_dataset("json", data_files="issues/datasets-issues.jsonl", split="train") Gives this error: datasets.builder.DatasetGenerationError: An error occurred while generating the dataset...
folterj
https://github.com/huggingface/datasets/issues/5422
null
false
1,532,278,307
5,421
Support case-insensitive Hub dataset name in load_dataset
closed
[ "Closing as case-insensitivity should be only for URL redirection on the Hub. In the APIs, we will only support the canonical name (https://github.com/huggingface/moon-landing/pull/2399#issuecomment-1382085611)" ]
2023-01-13T13:07:07
2023-01-13T20:12:32
2023-01-13T20:12:32
### Feature request The dataset name on the Hub is case-insensitive (see https://github.com/huggingface/moon-landing/pull/2399, internal issue), i.e., https://huggingface.co/datasets/GLUE redirects to https://huggingface.co/datasets/glue. Ideally, we could load the glue dataset using the following: ``` from d...
severo
https://github.com/huggingface/datasets/issues/5421
null
false
1,532,265,742
5,420
ci: 🎡 remove two obsolete issue templates
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-01-13T12:58:43
2023-01-13T13:36:00
2023-01-13T13:29:01
add-dataset is not needed anymore since the "canonical" datasets are on the Hub. And dataset-viewer is managed within the datasets-server project. See https://github.com/huggingface/datasets/issues/new/choose <img width="1245" alt="Capture d’écran 2023-01-13 à 13 59 58" src="https://user-images.githubuserconten...
severo
https://github.com/huggingface/datasets/pull/5420
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5420", "html_url": "https://github.com/huggingface/datasets/pull/5420", "diff_url": "https://github.com/huggingface/datasets/pull/5420.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5420.patch", "merged_at": "2023-01-13T13:29...
true
1,531,999,850
5,419
label_column='labels' in datasets.TextClassification and 'label' or 'label_ids' in transformers.DataColator
closed
[ "Hi! Thanks for pointing out this inconsistency. Changing the default value at this point is probably not worth it, considering we've started discussing the state of the task API internally - we will most likely deprecate the current one and replace it with a more robust solution that relies on the `train_eval_inde...
2023-01-13T09:40:07
2023-07-21T14:27:08
2023-07-21T14:27:08
### Describe the bug When preparing a dataset for a task using `datasets.TextClassification`, the output feature is named `labels`. When preparing the trainer using the `transformers.DataCollator`, the default column name is `label` for a binary problem or `label_ids` for a multi-class problem. It is required to rename the column...
CreatixEA
https://github.com/huggingface/datasets/issues/5419
null
false
1,530,111,184
5,418
Add ProgressBar for `to_parquet`
closed
[ "Thanks for your proposal, @zanussbaum. Yes, I agree that would definitely be a nice feature to have!", "@albertvillanova I’m happy to make a quick PR for the feature! let me know ", "That would be awesome ! You can comment `#self-assign` to assign you to this issue and open a PR :) Will be happy to review", ...
2023-01-12T05:06:20
2023-01-24T18:18:24
2023-01-24T18:18:24
### Feature request Add a progress bar for `Dataset.to_parquet`, similar to how `to_json` works. ### Motivation It's a bit frustrating to not know how long a dataset will take to write to file and if it's stuck or not without a progress bar ### Your contribution Sure I can help if needed
zanussbaum
https://github.com/huggingface/datasets/issues/5418
null
false
1,526,988,113
5,416
Fix RuntimeError: Sharding is ambiguous for this dataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._", "By the way, do we know how many datasets are impacted by this issue?\r\n\r\nMaybe we should do a patch release with this fix.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated be...
2023-01-10T08:43:19
2023-01-18T17:12:17
2023-01-18T14:09:02
This PR fixes the RuntimeError: Sharding is ambiguous for this dataset. The error for ambiguous sharding will be raised only if num_proc > 1. Fix #5415, fix #5414. Fix https://huggingface.co/datasets/ami/discussions/3.
albertvillanova
https://github.com/huggingface/datasets/pull/5416
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5416", "html_url": "https://github.com/huggingface/datasets/pull/5416", "diff_url": "https://github.com/huggingface/datasets/pull/5416.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5416.patch", "merged_at": "2023-01-18T14:09...
true
1,526,904,861
5,415
RuntimeError: Sharding is ambiguous for this dataset
closed
[]
2023-01-10T07:36:11
2023-01-18T14:09:04
2023-01-18T14:09:03
### Describe the bug When loading some datasets, a RuntimeError is raised. For example, for "ami" dataset: https://huggingface.co/datasets/ami/discussions/3 ``` .../huggingface/datasets/src/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size) ...
albertvillanova
https://github.com/huggingface/datasets/issues/5415
null
false
1,525,733,818
5,414
Sharding error with Multilingual LibriSpeech
closed
[ "Thanks for reporting, @Nithin-Holla.\r\n\r\nThis is a known issue for multiple datasets and we are investigating it:\r\n- See e.g.: https://huggingface.co/datasets/ami/discussions/3", "Main issue:\r\n- #5415", "@albertvillanova Thanks! As a workaround for now, can I use the dataset in streaming mode?", "Yes,...
2023-01-09T14:45:31
2023-01-18T14:09:04
2023-01-18T14:09:04
### Describe the bug Loading the German Multilingual LibriSpeech dataset results in a RuntimeError regarding sharding with the following stacktrace: ``` Downloading and preparing dataset multilingual_librispeech/german to /home/nithin/datadrive/cache/huggingface/datasets/facebook___multilingual_librispeech/german/...
Nithin-Holla
https://github.com/huggingface/datasets/issues/5414
null
false
1,524,591,837
5,413
concatenate_datasets fails when two dataset with shards > 1 and unequal shard numbers
closed
[ "Hi ! Thanks for reporting :)\r\n\r\nI managed to reproduce the hub using\r\n```python\r\n\r\nfrom datasets import concatenate_datasets, Dataset, load_from_disk\r\n\r\nDataset.from_dict({\"a\": range(9)}).save_to_disk(\"tmp/ds1\")\r\nds1 = load_from_disk(\"tmp/ds1\")\r\nds1 = concatenate_datasets([ds1, ds1])\r\n\r\...
2023-01-08T17:01:52
2023-01-26T09:27:21
2023-01-26T09:27:21
### Describe the bug When using `concatenate_datasets([dataset1, dataset2], axis = 1)` to concatenate two datasets with shards > 1, it fails: ``` File "/home/xzg/anaconda3/envs/tri-transfer/lib/python3.9/site-packages/datasets/combine.py", line 182, in concatenate_datasets return _concatenate_map_style_data...
ZeguanXiao
https://github.com/huggingface/datasets/issues/5413
null
false