Dataset schema (column, dtype, and observed range or length statistics):

| column | dtype | observed |
|---|---|---|
| id | int64 | 599M to 3.48B |
| number | int64 | 1 to 7.8k |
| title | string | length 1 to 290 |
| state | string | 2 values |
| comments | list | length 0 to 30 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-10-05 06:37:50 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-10-05 10:32:43 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-10-01 13:56:03 |
| body | string | length 0 to 228k |
| user | string | length 3 to 26 |
| html_url | string | length 46 to 51 |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
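To make the schema concrete, here is a minimal, dependency-free sketch of one record conforming to it, using values from the first row below (issue #3069). The plain-dict shape is illustrative only, not the library's actual in-memory representation:

```python
from datetime import datetime

# One record shaped like the schema above. `comments` is a list of strings,
# timestamps have second resolution, and `pull_request` is None for plain issues.
record = {
    "id": 1_024_818_680,
    "number": 3069,
    "title": "CI fails on Windows with FileNotFoundError when setting up s3_base fixture",
    "state": "closed",
    "comments": [],
    "created_at": datetime(2021, 10, 13, 5, 52, 26),
    "updated_at": datetime(2021, 10, 13, 8, 5, 49),
    "closed_at": datetime(2021, 10, 13, 6, 49, 48),
    "body": "## Describe the bug ...",
    "user": "albertvillanova",
    "html_url": "https://github.com/huggingface/datasets/issues/3069",
    "pull_request": None,
    "is_pull_request": False,
}

# Basic invariants implied by the schema: a closed issue has closed_at set,
# and no event precedes creation.
assert (record["closed_at"] is not None) == (record["state"] == "closed")
assert record["created_at"] <= record["closed_at"] <= record["updated_at"]
```

Note how the observed html_url length range (46 to 51) holds for this record as well.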
**#3069 · CI fails on Windows with FileNotFoundError when setting up s3_base fixture** (issue, closed)
- id 1,024,818,680 · author albertvillanova · https://github.com/huggingface/datasets/issues/3069
- created 2021-10-13T05:52:26 · updated 2021-10-13T08:05:49 · closed 2021-10-13T06:49:48
- comments: (none)
- body: ## Describe the bug After commit 9353fc863d0c99ab0427f83cc5a4f04fcf52f1df, the CI fails on Windows with FileNotFoundError when setting up s3_base fixture. See: https://app.circleci.com/pipelines/github/huggingface/datasets/8151/workflows/5db8d154-badd-4d3d-b202-ca7a318997a2/jobs/50321 Error summary: ``` ERROR tes...

**#3068 · feat: increase streaming retry config** (pull request, closed, merged 2021-10-13T09:25...)
- id 1,024,681,264 · author borisdayma · https://github.com/huggingface/datasets/pull/3068
- created 2021-10-13T02:00:50 · updated 2021-10-13T09:25:56 · closed 2021-10-13T09:25:54
- comments: [ "@lhoestq I had 2 runs for more than 2 days each, continuously streaming (they were failing before with 3 retries at 1 sec interval).\r\n\r\nThey are running on TPU's (so great internet connection) and only had connection errors a few times each (3 & 4). Each time it worked after only 1 retry.\r\nThe reason for a h...
- body: Increase streaming config parameters: * retry interval set to 5 seconds * max retries set to 20 (so 1mn 40s)

**#3067 · add story_cloze** (pull request, closed, merged 2021-10-13T13:48...)
- id 1,024,023,185 · author zaidalyafeai · https://github.com/huggingface/datasets/pull/3067
- created 2021-10-12T16:36:53 · updated 2021-10-13T13:48:13 · closed 2021-10-13T13:48:13
- comments: [ "Thanks for pushing this dataset :)\r\n\r\nAccording to the CI, the file `cloze_test_val__spring2016 - cloze_test_ALL_val.csv` is missing in the dummy data zip file (the zip files seem empty). Feel free to add this file with 4-5 lines and it should be good\r\n\r\nAnd you can fix the YAML tags with\r\n```yaml\r\npre...
- body: (null)

**#3066 · Add iter_archive** (pull request, closed, merged 2021-10-18T09:12...)
- id 1,024,005,311 · author lhoestq · https://github.com/huggingface/datasets/pull/3066
- created 2021-10-12T16:17:16 · updated 2022-09-21T14:10:10 · closed 2021-10-18T09:12:46
- comments: (none)
- body: Added the `iter_archive` method for the StreamingDownloadManager. It was already implemented in the regular DownloadManager. Now it can be used to stream from TAR archives as mentioned in https://github.com/huggingface/datasets/issues/2829 I also updated the `food101` dataset as an example. Any image/audio data...
**#3065 · Fix test command after refac** (pull request, closed, merged 2021-10-12T15:28...)
- id 1,023,951,322 · author lhoestq · https://github.com/huggingface/datasets/pull/3065
- created 2021-10-12T15:23:30 · updated 2021-10-12T15:28:47 · closed 2021-10-12T15:28:46
- comments: (none)
- body: Fix the `datasets-cli` test command after the `prepare_module` change in #2986

**#3064 · Make `interleave_datasets` more robust** (issue, open)
- id 1,023,900,075 · author sbmaruf · https://github.com/huggingface/datasets/issues/3064
- created 2021-10-12T14:34:53 · updated 2022-07-30T08:47:26 · not closed
- comments: [ "Hi @lhoestq Any response on this issue?", "Hi ! Sorry for the late response\r\n\r\nI agree `interleave_datasets` would benefit a lot from having more flexibility. If I understand correctly it would be nice to be able to define stopping strategies like `stop=\"first_exhausted\"` (default) or `stop=\"all_exhauste...
- body: **Is your feature request related to a problem? Please describe.** Right now there are few hiccups using `interleave_datasets`. Interleaved dataset iterates until the smallest dataset completes its iterator. In this way larger datasets may not complete full epoch of iteration. It creates new problems in calculation...

**#3063 · Windows CI is unable to test streaming properly because of SSL issues** (issue, closed)
- id 1,023,588,297 · author lhoestq · https://github.com/huggingface/datasets/issues/3063
- created 2021-10-12T09:33:40 · updated 2022-08-24T14:59:29 · closed 2022-08-24T14:59:29
- comments: [ "I think this problem is already fixed:\r\n```python\r\nIn [4]: import fsspec\r\n ...:\r\n ...: url = \"https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattributes\"\r\n ...:\r\n ...: fsspec.open(url).open()\r\nOut[4]: <File-like object HTTP...
- body: In https://github.com/huggingface/datasets/pull/3041 the windows tests were skipped because of SSL issues with moon-staging.huggingface.co:443 The issue appears only on windows with asyncio. On Linux it works. With requests it works as well. And with the production environment huggingface.co it also works. to rep...

**#3062 · Update summary on PyPi beyond NLP** (pull request, closed, merged 2021-10-13T08:55...)
- id 1,023,209,592 · author thomwolf · https://github.com/huggingface/datasets/pull/3062
- created 2021-10-11T23:27:46 · updated 2021-10-13T08:55:54 · closed 2021-10-13T08:55:54
- comments: (none)
- body: More than just NLP now

**#3061 · Feature request : add leave=True to dataset.map to enable tqdm nested bars (and whilst we're at it couldn't we get a way to access directly tqdm underneath?)** (issue, open)
- id 1,023,103,119 · author BenoitDalFerro · https://github.com/huggingface/datasets/issues/3061
- created 2021-10-11T20:49:49 · updated 2021-10-22T09:34:10 · not closed
- comments: [ "@lhoestq, @albertvillanova can we have `**tqdm_kwargs` in `map`? If there are any fields that are important to our tqdm (like iterable or unit), we can pop them before initialising the tqdm object so as to avoid duplicity.", "Hi ! Sounds like a good idea :)\r\n\r\nAlso I think it would be better to have this as ...
- body: **A clear and concise description of what you want to happen.** It would be so nice to be able to nest HuggingFace `Datasets.map() ` progress bars in the grander scheme of things and whilst we're at it why not other functions. **Describe alternatives you've considered** By the way is there not a way to directl...
**#3060 · load_dataset('openwebtext') yields "Compressed file ended before the end-of-stream marker was reached"** (issue, closed)
- id 1,022,936,396 · author RylanSchaeffer · https://github.com/huggingface/datasets/issues/3060
- created 2021-10-11T17:05:27 · updated 2021-10-28T05:52:21 · closed 2021-10-28T05:52:21
- comments: [ "Hi @RylanSchaeffer, thanks for reporting.\r\n\r\nI'm sorry, but I was not able to reproduce your problem.\r\n\r\nNormally, the reason for this type of error is that, during your download of the data files, this was not fully complete.\r\n\r\nCould you please try to load the dataset again but forcing its redownload...
- body: ## Describe the bug When I try `load_dataset('openwebtext')`, I receive an "EOFError: Compressed file ended before the end-of-stream marker was reached" error. ## Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset('openwebtext') ``` ## Expected results I expect the `datas...

**#3059 · Fix task reloading from cache** (pull request, closed, merged 2021-10-11T12:23...)
- id 1,022,620,057 · author lhoestq · https://github.com/huggingface/datasets/pull/3059
- created 2021-10-11T12:03:04 · updated 2021-10-11T12:23:39 · closed 2021-10-11T12:23:39
- comments: (none)
- body: When reloading a dataset from the cache when doing `map`, the tasks templates were kept instead of being updated regarding the output of the `map` function. This is an issue because we drop the tasks templates that are not compatible anymore after `map`, for example if a column of the template was removed. This PR f...

**#3058 · Dataset wikipedia and Bookcorpusopen cannot be fetched from dataloader.** (issue, closed)
- id 1,022,612,664 · author hobbitlzy · https://github.com/huggingface/datasets/issues/3058
- created 2021-10-11T11:54:59 · updated 2022-01-19T14:03:49 · closed 2022-01-19T14:03:49
- comments: [ "Hi ! I think this issue is more related to the `transformers` project. Could you open an issue on https://github.com/huggingface/transformers ?\r\n\r\nAnyway I think the issue could be that both wikipedia and bookcorpusopen have an additional \"title\" column, contrary to wikitext which only has a \"text\" column....
- body: ## Describe the bug I have used the previous version of `transformers` and `datasets`. The dataset `wikipedia` can be successfully used. Recently, I upgraded them to the newest version and found it raises errors. I also tried other datasets. The `wikitext` works and the `bookcorpusopen` raises the same errors as `wikipe...

**#3057 · Error in per class precision computation** (issue, closed)
- id 1,022,508,315 · author tidhamecha2 · https://github.com/huggingface/datasets/issues/3057
- created 2021-10-11T10:05:19 · updated 2021-10-11T10:17:44 · closed 2021-10-11T10:16:16
- comments: [ "Hi @tidhamecha2, thanks for reporting.\r\n\r\nIndeed, we fixed this issue just one week ago: #3008\r\n\r\nThe fix will be included in our next version release.\r\n\r\nIn the meantime, you can incorporate the fix by installing `datasets` from the master branch:\r\n```\r\npip install -U git+ssh://[email protected]/hugg...
- body: ## Describe the bug When trying to get the per class precision values by providing `average=None`, the following error is thrown: `ValueError: can only convert an array of size 1 to a Python scalar` ## Steps to reproduce the bug ```python from datasets import load_dataset, load_metric precision_metric = load_metric("...

**#3056 · Fix meteor metric for version >= 3.6.4** (pull request, closed, merged 2021-10-11T07:29...)
- id 1,022,345,564 · author albertvillanova · https://github.com/huggingface/datasets/pull/3056
- created 2021-10-11T07:11:44 · updated 2021-10-11T07:29:20 · closed 2021-10-11T07:29:19
- comments: (none)
- body: After the `nltk` update, the meteor metric expects pre-tokenized inputs (breaking change). This PR fixes this issue, while maintaining compatibility with older versions.
**#3055 · CI test suite fails after meteor metric update** (issue, closed)
- id 1,022,319,238 · author albertvillanova · https://github.com/huggingface/datasets/issues/3055
- created 2021-10-11T06:37:12 · updated 2021-10-11T07:30:31 · closed 2021-10-11T07:30:31
- comments: (none)
- body: ## Describe the bug CI test suite fails: https://app.circleci.com/pipelines/github/huggingface/datasets/8110/workflows/f059ba43-9154-4632-bebb-82318447ddc9/jobs/50010 Stack trace: ``` ___________________ LocalMetricTest.test_load_metric_meteor ____________________ [gw1] linux -- Python 3.6.15 /home/circleci/.pye...

**#3054 · Update Biosses** (pull request, closed, merged 2021-10-13T09:04...)
- id 1,022,108,186 · author bwang482 · https://github.com/huggingface/datasets/pull/3054
- created 2021-10-10T22:25:12 · updated 2021-10-13T09:04:27 · closed 2021-10-13T09:04:27
- comments: (none)
- body: Fix variable naming

**#3053 · load_dataset('the_pile_openwebtext2') produces ArrowInvalid, value too large to fit in C integer type** (issue, closed)
- id 1,022,076,905 · author davidbau · https://github.com/huggingface/datasets/issues/3053
- created 2021-10-10T19:55:21 · updated 2023-02-24T14:02:20 · closed 2023-02-24T14:02:20
- comments: [ "I encountered the same bug using different datasets.\r\nany suggestions?", "+1, can reproduce here!", "I get the same error\r\nPlatform: Windows 10\r\nPython: python 3.8.8\r\nPyArrow: 5.0", "I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this i...
- body: ## Describe the bug When loading `the_pile_openwebtext2`, we get the error `pyarrow.lib.ArrowInvalid: Value 2111 too large to fit in C integer type` ## Steps to reproduce the bug ```python import datasets ds = datasets.load_dataset('the_pile_openwebtext2') ``` ## Expected results Should download the dataset...

**#3052 · load_dataset cannot download the data and hangs on forever if cache dir specified** (issue, closed)
- id 1,021,944,435 · author BenoitDalFerro · https://github.com/huggingface/datasets/issues/3052
- created 2021-10-10T10:31:36 · updated 2021-10-11T10:57:09 · closed 2021-10-11T10:56:36
- comments: [ "Issue was environment inconsistency, updating packages did the trick\r\n\r\n`conda install -c huggingface -c conda-forge datasets`\r\n\r\n> Collecting package metadata (current_repodata.json): done\r\n> Solving environment: |\r\n> The environment is inconsistent, please check the package plan carefully\r\n> The fo...
- body: ## Describe the bug After updating datasets, a code that ran just fine for ages began to fail. Specifying _datasets.load_dataset_'s _cache_dir_ optional argument on Windows 10 machine results in data download to hang on forever. Same call without cache_dir works just fine. Surprisingly exact same code just runs perfec...

**#3051 · Non-Matching Checksum Error with crd3 dataset** (issue, closed)
- id 1,021,852,234 · author RylanSchaeffer · https://github.com/huggingface/datasets/issues/3051
- created 2021-10-10T01:32:43 · updated 2022-03-15T15:54:26 · closed 2022-03-15T15:54:26
- comments: [ "I got the same error for another dataset (`multi_woz_v22`):\r\n\r\n```\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/...
- body: ## Describe the bug When I try loading the crd3 dataset (https://huggingface.co/datasets/crd3), an error is thrown. ## Steps to reproduce the bug ```python dataset = load_dataset('crd3', split='train') ``` ## Expected results I expect no error to be thrown. ## Actual results A non-matching checksum err...
**#3050 · Fix streaming: catch Timeout error** (pull request, closed, merged 2021-10-11T09:35...)
- id 1,021,772,622 · author borisdayma · https://github.com/huggingface/datasets/pull/3050
- created 2021-10-09T18:19:20 · updated 2021-10-12T15:28:18 · closed 2021-10-11T09:35:38
- comments: [ "I'm running a large test.\r\nLet's see if I get any error within a few days.", "This time it stopped after 8h but correctly raised `ConnectionError: Server Disconnected`.\r\n\r\nTraceback:\r\n```\r\nTraceback (most recent call last): ...
- body: Catches Timeout error during streaming. fix #3049

**#3049 · TimeoutError during streaming** (issue, closed)
- id 1,021,770,008 · author borisdayma · https://github.com/huggingface/datasets/issues/3049
- created 2021-10-09T18:06:51 · updated 2021-10-11T09:35:38 · closed 2021-10-11T09:35:38
- comments: (none)
- body: ## Describe the bug I got a TimeoutError after streaming for about 10h. ## Steps to reproduce the bug Very long code but we could do a test of streaming data indefinitely, though the error may take a while to appear. ## Expected results This error was not expected in the code which considers only `ClientError` but...

**#3048 · Identify which shard data belongs to** (issue, open)
- id 1,021,765,661 · author borisdayma · https://github.com/huggingface/datasets/issues/3048
- created 2021-10-09T17:46:35 · updated 2021-10-09T20:24:17 · not closed
- comments: [ "Independently of this I think it raises the need to allow multiprocessing during streaming so that we get samples from multiple shards in one batch." ]
- body: **Is your feature request related to a problem? Please describe.** I'm training on a large dataset made of multiple sub-datasets. During training I can observe some jumps in loss which may correspond to different shards. ![image](https://user-images.githubusercontent.com/715491/136668758-521263aa-a9b2-4ad2-8d22-...

**#3047 · Loading from cache a dataset for LM built from a text classification dataset sometimes errors** (issue, closed)
- id 1,021,360,616 · author sgugger · https://github.com/huggingface/datasets/issues/3047
- created 2021-10-08T18:23:11 · updated 2021-11-03T17:13:08 · closed 2021-11-03T17:13:08
- comments: [ "This has been fixed in 1.15, let me know if you still have this issue" ]
- body: ## Describe the bug Yes, I know, that description sucks. So the problem is arising in the course when we build a masked language modeling dataset using the IMDB dataset. To reproduce (or try since it's a bit fickle). Create a dataset for masked-language modeling from the IMDB dataset. ```python from datasets ...

**#3046 · Fix MedDialog metadata JSON** (pull request, closed, merged 2021-10-11T07:46...)
- id 1,021,021,368 · author albertvillanova · https://github.com/huggingface/datasets/pull/3046
- created 2021-10-08T12:04:40 · updated 2021-10-11T07:46:43 · closed 2021-10-11T07:46:42
- comments: (none)
- body: Fix #2969.
**#3045 · Fix inconsistent caching behaviour in Dataset.map() with multiprocessing #3044** (pull request, closed, not merged)
- id 1,020,968,704 · author vlievin · https://github.com/huggingface/datasets/pull/3045
- created 2021-10-08T10:59:21 · updated 2021-10-21T16:58:32 · closed 2021-10-21T14:22:44
- comments: [ "Hi ! Thanks for noticing this inconsistency and suggesting a fix :)\r\n\r\nIf I understand correctly you try to pass the same fingerprint to each processed shard of the dataset. This can be an issue since each shard is actually a different dataset with different data: they shouldn't have the same fingerprint.\r\n\...
- body: Fix #3044 1. A rough unit test that fails without the fix. It probably doesn't comply with your code standards, but that's just to draft the idea. 2. A one-liner fix

**#3044 · Inconsistent caching behaviour when using `Dataset.map()` with a `new_fingerprint` and `num_proc>1`** (issue, open)
- id 1,020,869,778 · author vlievin · https://github.com/huggingface/datasets/issues/3044
- created 2021-10-08T09:07:10 · updated 2025-03-04T07:16:00 · not closed
- comments: [ "Following the discussion in #3045 it would be nice to have a way to let users have a nice experience with caching even if the function is not hashable.\r\n\r\nCurrently a workaround is to make the function picklable. This can be done by implementing a callable class instead, that can be pickled using by implementi...
- body: ## Describe the bug Caching does not work when using `Dataset.map()` with: 1. a function that cannot be deterministically fingerprinted 2. `num_proc>1` 3. using a custom fingerprint set with the argument `new_fingerprint`. This means that the dataset will be mapped with the function for each and every call, w...

**#3043 · Add PASS dataset** (issue, closed)
- id 1,020,252,114 · author osanseviero · https://github.com/huggingface/datasets/issues/3043
- created 2021-10-07T16:43:43 · updated 2022-01-20T16:50:47 · closed 2022-01-20T16:50:47
- comments: (none)
- body: ## Adding a Dataset - **Name:** PASS - **Description:** An ImageNet replacement for self-supervised pretraining without humans - **Data:** https://www.robots.ox.ac.uk/~vgg/research/pass/ https://github.com/yukimasano/PASS Instructions to add a new dataset can be found [here](https://github.com/huggingface/dataset...

**#3042 · Improving elasticsearch integration** (pull request, open)
- id 1,020,047,289 · author ggdupont · https://github.com/huggingface/datasets/pull/3042
- created 2021-10-07T13:28:35 · updated 2022-07-06T15:19:48 · not closed
- comments: [ "@lhoestq @albertvillanova I was trying to fix the failing tests in circleCI but is there a test elasticsearch instance somewhere? If not, can I launch a docker container to have one?" ]
- body: - adding murmurhash signature to sample in index - adding optional credentials for remote elasticsearch server - enabling sample update in index - upgrade the elasticsearch 7.10.1 python client - adding ElasticsearchBulider to instantiate a dataset from an index and a filtering query

**#3041 · Load private data files + use glob on ZIP archives for json/csv/etc. module inference** (pull request, closed, merged 2021-10-12T15:25...)
- id 1,018,911,385 · author lhoestq · https://github.com/huggingface/datasets/pull/3041
- created 2021-10-06T18:16:36 · updated 2021-10-12T15:25:48 · closed 2021-10-12T15:25:46
- comments: [ "I have an error on windows:\r\n```python\r\naiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host moon-staging.huggingface.co:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131)')]\r\n```\r\nat th...
- body: As mentioned in https://github.com/huggingface/datasets/issues/3032 loading data files from a private repository isn't working correctly because of the data files resolver. #2986 did a refactor of the data files resolver. I added authentication to it. I also improved it to glob inside ZIP archives to look for json/...
**#3040 · [save_to_disk] Using `select()` followed by `save_to_disk` saves complete dataset making it hard to create dummy dataset** (issue, closed)
- id 1,018,782,475 · author patrickvonplaten · https://github.com/huggingface/datasets/issues/3040
- created 2021-10-06T17:08:47 · updated 2021-11-02T15:41:08 · closed 2021-11-02T15:41:08
- comments: [ "Hi,\r\n\r\nthe `save_to_disk` docstring explains that `flatten_indices` has to be called on a dataset before saving it to save only the shard/slice of the dataset.", "That works! Thanks!\r\n\r\nMight be worth doing that automatically actually in case the `save_to_disk` is called on a dataset that has an indices ...
- body: ## Describe the bug When only keeping a dummy size of a dataset (say the first 100 samples), and then saving it to disk to upload it in the following to the hub for easy demo/use - not just the small dataset is saved but the whole dataset with an indices file. The problem with this is that the dataset is still very...

**#3039 · Add sberquad dataset** (pull request, closed, merged 2021-10-13T10:16...)
- id 1,018,219,800 · author Alenush · https://github.com/huggingface/datasets/pull/3039
- created 2021-10-06T12:32:02 · updated 2021-10-13T10:19:11 · closed 2021-10-13T10:16:04
- comments: (none)
- body: (null)

**#3038 · add sberquad dataset** (pull request, closed, not merged)
- id 1,018,113,499 · author Alenush · https://github.com/huggingface/datasets/pull/3038
- created 2021-10-06T11:33:39 · updated 2021-10-06T11:58:01 · closed 2021-10-06T11:58:01
- comments: (none)
- body: (null)

**#3037 · SberQuad** (pull request, closed, not merged)
- id 1,018,091,919 · author Alenush · https://github.com/huggingface/datasets/pull/3037
- created 2021-10-06T11:21:08 · updated 2021-10-06T11:33:08 · closed 2021-10-06T11:33:08
- comments: (none)
- body: (null)

**#3036 · Protect master branch to force contributions via Pull Requests** (issue, closed)
- id 1,017,687,944 · author albertvillanova · https://github.com/huggingface/datasets/issues/3036
- created 2021-10-06T07:34:17 · updated 2021-10-07T06:51:47 · closed 2021-10-07T06:49:52
- comments: [ "It would be nice to protect the master from direct commits, but still having a way to merge our own PRs when no review is required (for example when updating a dataset_infos.json file, or minor bug fixes - things that happen quite often actually).\r\nDo you know if there's a way ?", "you can if you're an admin o...
- body: In order to have a clearer Git history in the master branch, I propose to protect it so that all contributions must be done through a Pull Request and no direct commits to master are allowed. - The Pull Request allows to give context, discuss any potential issues and improve the quality of the contribution - The Pull...
**#3035 · `load_dataset` does not work with uploaded arrow file** (issue, open)
- id 1,016,770,071 · author patrickvonplaten · https://github.com/huggingface/datasets/issues/3035
- created 2021-10-05T20:15:10 · updated 2021-10-06T17:01:37 · not closed
- comments: [ "Hi ! This is not a bug, this is simply not implemented.\r\n`save_to_disk` is for on-disk serialization and was not made compatible for the Hub.\r\nThat being said, I agree we actually should make it work with the Hub x)", "cc @LysandreJik maybe we can solve this at the same time as adding `push_to_hub`" ]
- body: ## Describe the bug I've preprocessed and uploaded a dataset here: https://huggingface.co/datasets/ami-wav2vec2/ami_headset_single_preprocessed . The dataset is in `.arrow` format. The dataset can correctly be loaded when doing: ```bash git lfs install git clone https://huggingface.co/datasets/ami-wav2vec2/a...

**#3034 · Errors loading dataset using fs = a gcsfs.GCSFileSystem** (issue, open)
- id 1,016,759,202 · author dconatha · https://github.com/huggingface/datasets/issues/3034
- created 2021-10-05T20:07:08 · updated 2021-10-05T20:26:39 · not closed
- comments: (none)
- body: ## Describe the bug Cannot load dataset using a `gcsfs.GCSFileSystem`. I'm not sure if this should be a bug in `gcsfs` or here... Basically what seems to be happening is that since datasets saves datasets as folders and folders aren't "real objects" in gcs, gcsfs raises a 404 error. There are workarounds if you...

**#3033 · Actual "proper" install of ruamel.yaml in the windows CI** (pull request, closed, merged 2021-10-05T17:54...)
- id 1,016,619,572 · author lhoestq · https://github.com/huggingface/datasets/pull/3033
- created 2021-10-05T17:52:07 · updated 2021-10-05T17:54:57 · closed 2021-10-05T17:54:57
- comments: (none)
- body: It was impossible to update the package directly with `pip`. Indeed it was installed with `distutils`, which prevents `pip` or `conda` from uninstalling it. I had to `rm` a directory from the `site-packages` python directory, and then do `pip install ruamel.yaml` It's not that "proper" but I couldn't find better soluti...

**#3032 · Error when loading private dataset with "data_files" arg** (issue, closed)
- id 1,016,488,475 · author borisdayma · https://github.com/huggingface/datasets/issues/3032
- created 2021-10-05T15:46:27 · updated 2021-10-12T15:26:22 · closed 2021-10-12T15:25:46
- comments: [ "We'll do a release tomorrow or on Wednesday to make the fix available :)\r\n\r\nThanks for reporting !" ]
- body: ## Describe the bug A clear and concise description of what the bug is. Private datasets with no loading script can't be loaded using the `data_files` parameter. ## Steps to reproduce the bug ```python from datasets import load_dataset data_files = {"train": "**/train/*/*.jsonl", "valid": "**/valid/*/*.jsonl"} d...

**#3031 · Align tqdm control with cache control** (pull request, closed, merged 2021-10-18T14:59...)
- id 1,016,458,496 · author mariosasko · https://github.com/huggingface/datasets/pull/3031
- created 2021-10-05T15:18:49 · updated 2021-10-18T15:00:21 · closed 2021-10-18T14:59:30
- comments: [ "Could you add this function to the documentation please ?\r\n\r\nYou can add it in `main_classes.rst`, and maybe add a `Tip` section in the `map` section in the `process.rst`" ]
- body: Currently, once disabled with `disable_progress_bar`, progress bars cannot be re-enabled again. To overcome this limitation, this PR introduces the `set_progress_bar_enabled` function that accepts a boolean indicating whether to display progress bars. The goal is to provide a similar API to the existing cache control A...
1,016,435,324
3,030
Add `remove_columns` to `IterableDataset`
closed
[ "Thanks ! That looks all good :)\r\n\r\nI don't think that batching would help. Indeed we're dealing with python iterators that yield elements one by one, so batched `map` needs to accumulate a batch, apply the function, and then yield examples from the batch.\r\n\r\nThough once we have parallel processing in `map`...
2021-10-05T14:58:33
2021-10-08T15:33:15
2021-10-08T15:31:53
Fixes #2944 WIP * Not tested yet. * We might want to allow batched remove for efficiency. @lhoestq Do you think it should have `batched=` and `batch_size=`?
changjonathanc
https://github.com/huggingface/datasets/pull/3030
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3030", "html_url": "https://github.com/huggingface/datasets/pull/3030", "diff_url": "https://github.com/huggingface/datasets/pull/3030.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3030.patch", "merged_at": "2021-10-08T15:31...
true
1,016,389,901
3,029
Use standard open-domain validation split in nq_open
closed
[ "I had to run datasets-cli with --ignore_verifications the first time since it was complaining about a missing file, but now it runs without that flag fine. I moved dummy_data.zip to the new folder, but also had to modify the filename of the test file in the zip (should I not have done that?). Finally, I added the ...
2021-10-05T14:19:27
2021-10-05T14:56:46
2021-10-05T14:56:45
The nq_open dataset originally drew the validation set from this file: https://github.com/google-research-datasets/natural-questions/blob/master/nq_open/NQ-open.efficientqa.dev.1.1.sample.jsonl However, that's the dev set used specifically and only for the efficientqa competition, and it's not the same dev set as is ...
craffel
https://github.com/huggingface/datasets/pull/3029
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3029", "html_url": "https://github.com/huggingface/datasets/pull/3029", "diff_url": "https://github.com/huggingface/datasets/pull/3029.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3029.patch", "merged_at": "2021-10-05T14:56...
true
1,016,230,272
3,028
Properly install ruamel-yaml for windows CI
closed
[ "@lhoestq I would say this does not \"properly\" install `ruamel-yaml`, but the contrary, you overwrite the previous version without desinstalling it first.\r\n\r\nAccording to `pip` docs:\r\n> This can break your system if the existing package is of a different version or was installed with a different package ma...
2021-10-05T11:51:15
2021-10-05T14:02:12
2021-10-05T11:51:22
null
lhoestq
https://github.com/huggingface/datasets/pull/3028
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3028", "html_url": "https://github.com/huggingface/datasets/pull/3028", "diff_url": "https://github.com/huggingface/datasets/pull/3028.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3028.patch", "merged_at": "2021-10-05T11:51...
true
1,016,150,117
3,027
Resolve data_files by split name
closed
[ "Awesome @lhoestq I like the proposal and it works great on my JSON community dataset. Here is the [log](https://gist.github.com/vblagoje/714babc325bcbdd5de579fd8e1648892). ", "From my discussion with @borisdayma it would be more general the files match if their paths contains the split name - not only if the fil...
2021-10-05T10:24:36
2021-11-05T17:49:58
2021-11-05T17:49:57
This issue is about discussing the default behavior when someone loads a dataset that consists in data files. For example: ```python load_dataset("lhoestq/demo1") ``` should return two splits "train" and "test" since the dataset repostiory is like ``` data/ ├── train.csv └── test.csv ``` Currently it returns ...
lhoestq
https://github.com/huggingface/datasets/issues/3027
null
false
1,016,067,794
3,026
added arxiv paper in swiss_judgment_prediction dataset card
closed
[]
2021-10-05T09:02:01
2021-10-08T16:01:44
2021-10-08T16:01:24
null
JoelNiklaus
https://github.com/huggingface/datasets/pull/3026
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3026", "html_url": "https://github.com/huggingface/datasets/pull/3026", "diff_url": "https://github.com/huggingface/datasets/pull/3026.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3026.patch", "merged_at": "2021-10-08T16:01...
true
1,016,061,222
3,025
Fix Windows test suite
closed
[]
2021-10-05T08:55:22
2021-10-05T09:58:28
2021-10-05T09:58:27
Try a hotfix to restore Windows test suite. Fix #3024.
albertvillanova
https://github.com/huggingface/datasets/pull/3025
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3025", "html_url": "https://github.com/huggingface/datasets/pull/3025", "diff_url": "https://github.com/huggingface/datasets/pull/3025.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3025.patch", "merged_at": "2021-10-05T09:58...
true
1,016,052,911
3,024
Windows test suite fails
closed
[]
2021-10-05T08:46:46
2021-10-05T09:58:27
2021-10-05T09:58:27
## Describe the bug There is an error during installation of tests dependencies for Windows: https://app.circleci.com/pipelines/github/huggingface/datasets/7981/workflows/9b6a0114-2b8e-4069-94e5-e844dbbdba4e/jobs/49206 ``` ERROR: Cannot uninstall 'ruamel-yaml'. It is a distutils installed project and thus we can...
albertvillanova
https://github.com/huggingface/datasets/issues/3024
null
false
1,015,923,031
3,023
Fix typo
closed
[]
2021-10-05T06:06:11
2021-10-05T11:56:55
2021-10-05T11:56:55
null
qqaatw
https://github.com/huggingface/datasets/pull/3023
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3023", "html_url": "https://github.com/huggingface/datasets/pull/3023", "diff_url": "https://github.com/huggingface/datasets/pull/3023.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3023.patch", "merged_at": "2021-10-05T11:56...
true
1,015,750,221
3,022
MeDAL dataset: Add further description and update download URL
closed
[ "@lhoestq I'm a bit confused by the error message. I haven't touched the YAML code at all - do you have any insight on that?", "I just added the missing `pretty_name` tag in the YAML - sorry about that ;)", "Thanks! Seems like it did the trick since the tests are passing. Let me know if there's anything else I ...
2021-10-05T00:13:28
2021-10-13T09:03:09
2021-10-13T09:03:09
Added more details in the following sections: * Dataset Structure * Data Instances * Data Splits * Source Data * Annotations * Discussions of Biases * Licensing Information
xhluca
https://github.com/huggingface/datasets/pull/3022
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3022", "html_url": "https://github.com/huggingface/datasets/pull/3022", "diff_url": "https://github.com/huggingface/datasets/pull/3022.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3022.patch", "merged_at": "2021-10-13T09:03...
true
1,015,444,094
3,021
Support loading dataset from multiple zipped CSV data files
closed
[]
2021-10-04T17:33:57
2021-10-06T08:36:46
2021-10-06T08:36:45
Fix partially #3018. CC: @lewtun
albertvillanova
https://github.com/huggingface/datasets/pull/3021
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3021", "html_url": "https://github.com/huggingface/datasets/pull/3021", "diff_url": "https://github.com/huggingface/datasets/pull/3021.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3021.patch", "merged_at": "2021-10-06T08:36...
true
1,015,406,105
3,020
Add a metric for the MATH dataset (competition_math).
closed
[ "I believe the only failed test related to this PR is tests/test_metric_common.py::LocalMetricTest::test_load_metric_competition_math. It gives the following error:\r\n\r\nImportError: To be able to use this dataset, you need to install the following dependencies['math_equivalence'] using 'pip install git+https://g...
2021-10-04T16:52:16
2021-10-22T10:29:31
2021-10-22T10:29:31
This metric computes accuracy for the MATH dataset (https://arxiv.org/abs/2103.03874) after canonicalizing the prediction and the reference (e.g., converting "1/2" to "\\\\frac{1}{2}").
hacobe
https://github.com/huggingface/datasets/pull/3020
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3020", "html_url": "https://github.com/huggingface/datasets/pull/3020", "diff_url": "https://github.com/huggingface/datasets/pull/3020.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3020.patch", "merged_at": "2021-10-22T10:29...
true
1,015,339,983
3,019
Fix filter leaking
closed
[]
2021-10-04T15:42:58
2022-06-03T08:28:14
2021-10-05T08:33:07
If filter is called after using a first transform `shuffle`, `select`, `shard`, `train_test_split`, or `filter`, then it could not work as expected and return examples from before the first transform. This is because the indices mapping was not taken into account when saving the indices to keep when doing the filtering...
lhoestq
https://github.com/huggingface/datasets/pull/3019
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3019", "html_url": "https://github.com/huggingface/datasets/pull/3019", "diff_url": "https://github.com/huggingface/datasets/pull/3019.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3019.patch", "merged_at": "2021-10-05T08:33...
true
1,015,311,877
3,018
Support multiple zipped CSV data files
open
[ "@lhoestq I would like to draw your attention to the proposed API by @lewtun, using `data_dir` to pass the ZIP URL.\r\n\r\nI'm not totally convinced with this... What do you think?\r\n\r\nMaybe we could discuss other approaches...\r\n\r\nOne brainstorming idea: what about using URL chaining with the hop operator in...
2021-10-04T15:16:59
2021-10-05T14:32:57
null
As requested by @lewtun, support loading multiple zipped CSV data files. ```python from datasets import load_dataset url = "https://domain.org/filename.zip" data_files = {"train": "train_filename.csv", "test": "test_filename.csv"} dataset = load_dataset("csv", data_dir=url, data_files=data_files) ```
albertvillanova
https://github.com/huggingface/datasets/issues/3018
null
false
1,015,215,528
3,017
Remove unused parameter in xdirname
closed
[]
2021-10-04T13:55:53
2021-10-05T11:37:01
2021-10-05T11:37:00
Minor fix to remove unused args `*p` in `xdirname`.
albertvillanova
https://github.com/huggingface/datasets/pull/3017
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3017", "html_url": "https://github.com/huggingface/datasets/pull/3017", "diff_url": "https://github.com/huggingface/datasets/pull/3017.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3017.patch", "merged_at": "2021-10-05T11:37...
true
1,015,208,654
3,016
Fix Windows paths in LJ Speech dataset
closed
[]
2021-10-04T13:49:37
2021-10-04T15:23:05
2021-10-04T15:23:04
Minor fix in LJ Speech dataset for Windows pathname component separator. Related to #1878.
albertvillanova
https://github.com/huggingface/datasets/pull/3016
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3016", "html_url": "https://github.com/huggingface/datasets/pull/3016", "diff_url": "https://github.com/huggingface/datasets/pull/3016.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3016.patch", "merged_at": "2021-10-04T15:23...
true
1,015,130,845
3,015
Extend support for streaming datasets that use glob.glob
closed
[]
2021-10-04T12:42:37
2021-10-05T13:46:39
2021-10-05T13:46:38
This PR extends the support in streaming mode for datasets that use `glob`, by patching the function `glob.glob`. Related to #2880, #2876, #2874
albertvillanova
https://github.com/huggingface/datasets/pull/3015
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3015", "html_url": "https://github.com/huggingface/datasets/pull/3015", "diff_url": "https://github.com/huggingface/datasets/pull/3015.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3015.patch", "merged_at": "2021-10-05T13:46...
true
1,015,070,751
3,014
Fix Windows path in MATH dataset
closed
[]
2021-10-04T11:41:07
2021-10-04T12:46:44
2021-10-04T12:46:44
Minor fix in MATH dataset for Windows pathname component separator. Related to #2982.
albertvillanova
https://github.com/huggingface/datasets/pull/3014
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3014", "html_url": "https://github.com/huggingface/datasets/pull/3014", "diff_url": "https://github.com/huggingface/datasets/pull/3014.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3014.patch", "merged_at": "2021-10-04T12:46...
true
1,014,960,419
3,013
Improve `get_dataset_infos`?
closed
[ "To keeps things simple maybe we should use `load_dataset_builder` in `get_dataset_infos`.\r\n`load_dataset_builder` instantiates a builder and runs the _infos() method in order to give you the most up-to-date infos, even if the dataset_infos.json is outdated or missing." ]
2021-10-04T09:47:04
2022-02-21T15:57:10
2022-02-21T15:57:10
Using the dedicated function `get_dataset_infos` on a dataset that has no dataset-info.json file returns an empty info: ``` >>> from datasets import get_dataset_infos >>> get_dataset_infos('wit') {} ``` While it's totally possible to get it (regenerate it) with: ``` >>> from datasets import load_dataset_b...
severo
https://github.com/huggingface/datasets/issues/3013
null
false
1,014,958,931
3,012
Replace item with float in metrics
closed
[]
2021-10-04T09:45:28
2021-10-04T11:30:34
2021-10-04T11:30:33
As pointed out by @mariosasko in #3001, calling `float()` instad of `.item()` is faster. Moreover, it might avoid potential issues if any of the third-party functions eventually returns a `float` instead of an `np.float64`. Related to #3001.
albertvillanova
https://github.com/huggingface/datasets/pull/3012
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3012", "html_url": "https://github.com/huggingface/datasets/pull/3012", "diff_url": "https://github.com/huggingface/datasets/pull/3012.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3012.patch", "merged_at": "2021-10-04T11:30...
true
1,014,935,713
3,011
load_dataset_builder should error if "name" does not exist?
open
[ "Yes I think it should raise an error. Currently it looks like it instantiates a custom configuration with the name given by the user:\r\nhttps://github.com/huggingface/datasets/blob/ba27ce33bf568374cf23a07669fdd875b5718bc2/src/datasets/builder.py#L391-L397" ]
2021-10-04T09:20:46
2022-09-20T13:05:07
null
``` import datasets as ds builder = ds.load_dataset_builder('sent_comp', name="doesnotexist") builder.info.config_name ``` returns ``` 'doesnotexist' ``` Shouldn't it raise an error instead? For this dataset, the only valid values for `name` should be: `"default"` or `None` (i.e. argument not passed)
severo
https://github.com/huggingface/datasets/issues/3011
null
false
1,014,918,470
3,010
Chain filtering is leaking
closed
[ "### Update:\r\nI wrote a bit cleaner code snippet (without transforming to json) that can expose leaking.\r\n```python\r\nimport datasets\r\nimport json\r\n\r\nitems = ['ab', 'c', 'df']\r\n\r\nds = datasets.Dataset.from_dict({'col': items})\r\nprint(list(ds))\r\n# > Prints: [{'col': 'ab'}, {'col': 'c'}, {'col': 'd...
2021-10-04T09:04:55
2022-06-01T17:36:44
2022-06-01T17:36:44
## Describe the bug As there's no support for lists within dataset fields, I convert my lists to json-string format. However, the bug described is occurring even when the data format is 'string'. These samples show that filtering behavior diverges from what's expected when chaining filterings. On sample 2 the second...
DrMatters
https://github.com/huggingface/datasets/issues/3010
null
false
1,014,868,235
3,009
Fix Windows paths in SUPERB benchmark datasets
closed
[]
2021-10-04T08:13:49
2021-10-04T13:43:25
2021-10-04T13:43:25
Minor fix in SUPERB benchmark datasets for Windows pathname component separator. Related to #2884, #2783 and #2619.
albertvillanova
https://github.com/huggingface/datasets/pull/3009
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3009", "html_url": "https://github.com/huggingface/datasets/pull/3009", "diff_url": "https://github.com/huggingface/datasets/pull/3009.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3009.patch", "merged_at": "2021-10-04T13:43...
true
1,014,849,163
3,008
Fix precision/recall metrics with None average
closed
[]
2021-10-04T07:54:15
2021-10-04T09:29:37
2021-10-04T09:29:36
Related to issue #2979 and PR #2992.
albertvillanova
https://github.com/huggingface/datasets/pull/3008
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3008", "html_url": "https://github.com/huggingface/datasets/pull/3008", "diff_url": "https://github.com/huggingface/datasets/pull/3008.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3008.patch", "merged_at": "2021-10-04T09:29...
true
1,014,775,450
3,007
Correct a typo
closed
[]
2021-10-04T06:15:47
2021-10-04T09:27:57
2021-10-04T09:27:57
null
Yann21
https://github.com/huggingface/datasets/pull/3007
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3007", "html_url": "https://github.com/huggingface/datasets/pull/3007", "diff_url": "https://github.com/huggingface/datasets/pull/3007.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3007.patch", "merged_at": "2021-10-04T09:27...
true
1,014,770,821
3,006
Fix Windows paths in CommonLanguage dataset
closed
[]
2021-10-04T06:08:58
2021-10-04T09:07:58
2021-10-04T09:07:58
Minor fix in CommonLanguage dataset for Windows pathname component separator. Related to #2989.
albertvillanova
https://github.com/huggingface/datasets/pull/3006
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3006", "html_url": "https://github.com/huggingface/datasets/pull/3006", "diff_url": "https://github.com/huggingface/datasets/pull/3006.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3006.patch", "merged_at": "2021-10-04T09:07...
true
1,014,615,420
3,005
DatasetDict.filter and Dataset.filter crashes with any "fn_kwargs" argument
closed
[ "Hi @DrMatters, thanks for reporting.\r\n\r\nThis issue was fixed 14 days ago: #2950.\r\n\r\nCurrently, the fix is only in the master branch and will be made available in our next library release.\r\n\r\nIn the meantime, you can incorporate the fix by installing datasets from the master branch:\r\n```shell\r\npip i...
2021-10-04T00:49:29
2021-10-11T10:18:01
2021-10-04T08:46:13
## Describe the bug The ".filter" method of DatasetDict or Dataset objects fails when passing any "fn_kwargs" argument ## Steps to reproduce the bug ```python import datasets example_dataset = datasets.Dataset.from_dict({"a": {1, 2, 3, 4}}) def filter_value(example, value): return example['a'] == value...
DrMatters
https://github.com/huggingface/datasets/issues/3005
null
false
1,014,336,617
3,004
LexGLUE: A Benchmark Dataset for Legal Language Understanding in English.
closed
[ "Please wait until Tuesday. Arxiv pre-print is pending. 🤗 ", "Hi @lhoestq, I updated the README with the Arxiv publication info and now the tests are not passing.\r\n\r\nIt seems that the error is completely irrelevant to my code:\r\n\r\n```\r\n Attempting uninstall: ruamel.yaml\r\n Found existing installatio...
2021-10-03T10:03:25
2021-10-13T13:37:02
2021-10-13T13:37:01
Inspired by the recent widespread use of the GLUE multi-task benchmark NLP dataset (Wang et al., 2018), the subsequent more difficult SuperGLUE (Wang et al., 2019), other previous multi-task NLP benchmarks (Conneau and Kiela, 2018; McCann et al., 2018), and similar initiatives in other domains (Peng et al., 2019), we i...
iliaschalkidis
https://github.com/huggingface/datasets/pull/3004
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3004", "html_url": "https://github.com/huggingface/datasets/pull/3004", "diff_url": "https://github.com/huggingface/datasets/pull/3004.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3004.patch", "merged_at": "2021-10-13T13:37...
true
1,014,137,933
3,003
common_language: Fix license in README.md
closed
[]
2021-10-02T18:47:37
2021-10-04T09:27:01
2021-10-04T09:27:01
...it's correct elsewhere
jimregan
https://github.com/huggingface/datasets/pull/3003
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3003", "html_url": "https://github.com/huggingface/datasets/pull/3003", "diff_url": "https://github.com/huggingface/datasets/pull/3003.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3003.patch", "merged_at": "2021-10-04T09:27...
true
1,014,120,524
3,002
Remove a reference to the open Arrow file when deleting a TF dataset created with to_tf_dataset
closed
[ "@lhoestq The test passes even without the try/except block!", "Hey, I'm a little late because I was caught up in the course work, but I double-checked this and it looks great. Thanks for fixing!" ]
2021-10-02T17:44:09
2021-10-13T11:48:00
2021-10-13T09:03:23
This [comment](https://github.com/huggingface/datasets/issues/2934#issuecomment-922970919) explains the issue. This PR fixes that with a `weakref` callback, and additionally: * renames `TensorflowDatasetMixIn` to `TensorflowDatasetMixin` for consistency * correctly indents `TensorflowDatasetMixin`'s docstring * repl...
mariosasko
https://github.com/huggingface/datasets/pull/3002
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3002", "html_url": "https://github.com/huggingface/datasets/pull/3002", "diff_url": "https://github.com/huggingface/datasets/pull/3002.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3002.patch", "merged_at": "2021-10-13T09:03...
true
1,014,024,982
3,001
Fix cast to Python scalar in Matthews Correlation metric
closed
[]
2021-10-02T11:44:59
2021-10-04T09:54:04
2021-10-04T09:26:12
This PR is motivated by issue #2964. The Matthews Correlation metric relies on sklearn's `matthews_corrcoef` function to compute the result. This function returns either `float` or `np.float64` (see the [source](https://github.com/scikit-learn/scikit-learn/blob/844b4be24d20fc42cc13b957374c718956a0db39/sklearn/metric...
mariosasko
https://github.com/huggingface/datasets/pull/3001
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3001", "html_url": "https://github.com/huggingface/datasets/pull/3001", "diff_url": "https://github.com/huggingface/datasets/pull/3001.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3001.patch", "merged_at": "2021-10-04T09:26...
true
1,013,613,219
3,000
Fix json loader when conversion not implemented
closed
[ "And we're already at PR number 3,000 ! ^^", "Thank you so much for fixing this @lhoestq 😍 ! I just tested the branch out and it works like a charm!" ]
2021-10-01T17:47:22
2021-10-01T18:05:00
2021-10-01T17:54:23
Sometimes the arrow json parser fails if the `block_size` is too small and returns an `ArrowNotImplementedError: JSON conversion to struct...` error. By increasing the block size it makes it work again. Hopefully it should help with https://github.com/huggingface/datasets/issues/2799 I tried with the file ment...
lhoestq
https://github.com/huggingface/datasets/pull/3000
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3000", "html_url": "https://github.com/huggingface/datasets/pull/3000", "diff_url": "https://github.com/huggingface/datasets/pull/3000.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3000.patch", "merged_at": "2021-10-01T17:54...
true
1,013,536,933
2,999
Set trivia_qa writer batch size
closed
[]
2021-10-01T16:23:26
2021-10-01T16:34:55
2021-10-01T16:34:55
Save some RAM when generating trivia_qa
lhoestq
https://github.com/huggingface/datasets/pull/2999
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2999", "html_url": "https://github.com/huggingface/datasets/pull/2999", "diff_url": "https://github.com/huggingface/datasets/pull/2999.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2999.patch", "merged_at": "2021-10-01T16:34...
true
1,013,372,871
2,998
cannot shuffle dataset loaded from disk
open
[]
2021-10-01T13:49:52
2021-10-01T13:49:52
null
## Describe the bug dataset loaded from disk cannot be shuffled. ## Steps to reproduce the bug ``` my_dataset = load_from_disk('s3://my_file/validate', fs=s3) sample = my_dataset.select(range(100)).shuffle(seed=1234) ``` ## Actual results ``` sample = my_dataset .select(range(100)).shuffle(seed=1234) ...
pya25
https://github.com/huggingface/datasets/issues/2998
null
false
1,013,270,069
2,997
Dataset has incorrect labels
closed
[ "Hi @marshmellow77, thanks for reporting.\r\n\r\nThat issue is fixed since `datasets` version 1.9.0 (see 16bc665f2753677c765011ef79c84e55486d4347).\r\n\r\nPlease, update `datasets` with: `pip install -U datasets`", "Thanks. Please note that the dataset explorer (https://huggingface.co/datasets/viewer/?dataset=tur...
2021-10-01T12:09:06
2021-10-01T15:32:00
2021-10-01T13:54:34
The dataset https://huggingface.co/datasets/turkish_product_reviews has incorrect labels - all reviews are labelled with "1" (positive sentiment). None of the reviews is labelled with "0". See screenshot attached: ![Capture](https://user-images.githubusercontent.com/63367770/135617428-14ce0b27-5208-4e66-a3ee-71542e3...
heiko-hotz
https://github.com/huggingface/datasets/issues/2997
null
false
1,013,266,373
2,996
Remove all query parameters when extracting protocol
closed
[ "Beware of cases like: `http://ufal.ms.mff.cuni.cz/umc/005-en-ur/download.php?f=umc005-corpus.zip` or `gzip://bg-cs.xml::https://opus.nlpl.eu/download.php?f=Europarl/v8/xml/bg-cs.xml.gz`. I see these URLs in the errors (https://observablehq.com/@huggingface/quality-assessment-of-datasets-loading?collection=@hugging...
2021-10-01T12:05:34
2021-10-04T08:48:13
2021-10-04T08:48:13
Fix `_get_extraction_protocol` to remove all query parameters, like `?raw=true`, `?dl=1`,...
albertvillanova
https://github.com/huggingface/datasets/pull/2996
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2996", "html_url": "https://github.com/huggingface/datasets/pull/2996", "diff_url": "https://github.com/huggingface/datasets/pull/2996.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2996.patch", "merged_at": "2021-10-04T08:48...
true
1,013,143,868
2,995
Fix trivia_qa unfiltered
closed
[ "CI fails due to missing tags, but they will be added in https://github.com/huggingface/datasets/pull/2949" ]
2021-10-01T09:53:43
2021-10-01T10:04:11
2021-10-01T10:04:10
Fix https://github.com/huggingface/datasets/issues/2993
lhoestq
https://github.com/huggingface/datasets/pull/2995
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2995", "html_url": "https://github.com/huggingface/datasets/pull/2995", "diff_url": "https://github.com/huggingface/datasets/pull/2995.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2995.patch", "merged_at": "2021-10-01T10:04...
true
1,013,000,475
2,994
Fix loading compressed CSV without streaming
closed
[]
2021-10-01T07:28:59
2021-10-01T15:53:16
2021-10-01T15:53:16
When implementing support to stream CSV files (https://github.com/huggingface/datasets/commit/ad489d4597381fc2d12c77841642cbeaecf7a2e0#diff-6f60f8d0552b75be8b3bfd09994480fd60dcd4e7eb08d02f721218c3acdd2782), a regression was introduced preventing loading compressed CSV files in non-streaming mode. This PR fixes it, a...
albertvillanova
https://github.com/huggingface/datasets/pull/2994
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2994", "html_url": "https://github.com/huggingface/datasets/pull/2994", "diff_url": "https://github.com/huggingface/datasets/pull/2994.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2994.patch", "merged_at": "2021-10-01T15:53...
true
1,012,702,665
2,993
Can't download `trivia_qa/unfiltered`
closed
[ "wooo that was fast! thank you @lhoestq !\r\nit is able to process now, though it's ignoring all files and ending up with 0 examples now haha :/\r\n\r\nFor subset \"unfiltered\":\r\n```python\r\n>>> load_dataset(\"trivia_qa\", \"unfiltered\")\r\nDownloading and preparing dataset trivia_qa/unfiltered (download: 3.07...
2021-09-30T23:00:18
2021-10-01T19:07:23
2021-10-01T19:07:22
## Describe the bug For some reason, I can't download `trivia_qa/unfiltered`. A file seems to be missing... I am able to see it fine through the viewer though... ## Steps to reproduce the bug ```python >>> from datasets import load_dataset >>> load_dataset("trivia_qa", "unfiltered") Downloading and preparing data...
VictorSanh
https://github.com/huggingface/datasets/issues/2993
null
false
1,012,325,594
2,992
Fix f1 metric with None average
closed
[]
2021-09-30T15:31:57
2021-10-01T14:17:39
2021-10-01T14:17:38
Fix #2979.
albertvillanova
https://github.com/huggingface/datasets/pull/2992
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2992", "html_url": "https://github.com/huggingface/datasets/pull/2992", "diff_url": "https://github.com/huggingface/datasets/pull/2992.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2992.patch", "merged_at": "2021-10-01T14:17...
true
1,012,174,823
2,991
add documentation for the `Unix style pattern` matching feature that can be leveraged for `data_files` in `load_dataset`
open
[]
2021-09-30T13:22:01
2021-09-30T13:22:01
null
Unless I'm mistaken, it seems that in the new documentation it is no longer mentioned that you can use Unix style pattern matching in the `data_files` argument of the `load_dataset` method. This feature was mentioned [here](https://huggingface.co/docs/datasets/loading_datasets.html#from-a-community-dataset-on-the-h...
SaulLu
https://github.com/huggingface/datasets/issues/2991
null
false
1,012,097,418
2,990
Make Dataset.map accept list of np.array
closed
[]
2021-09-30T12:08:54
2021-10-01T13:57:46
2021-10-01T13:57:46
Fix #2987.
albertvillanova
https://github.com/huggingface/datasets/pull/2990
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2990", "html_url": "https://github.com/huggingface/datasets/pull/2990", "diff_url": "https://github.com/huggingface/datasets/pull/2990.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2990.patch", "merged_at": "2021-10-01T13:57...
true
1,011,220,375
2,989
Add CommonLanguage
closed
[]
2021-09-29T17:21:30
2021-10-01T17:36:39
2021-10-01T17:00:03
This PR adds the Common Language dataset (https://zenodo.org/record/5036977) The dataset is intended for language-identification speech classifiers and is already used by models on the Hub: * https://huggingface.co/speechbrain/lang-id-commonlanguage_ecapa * https://huggingface.co/anton-l/wav2vec2-base-langid cc @...
anton-l
https://github.com/huggingface/datasets/pull/2989
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2989", "html_url": "https://github.com/huggingface/datasets/pull/2989", "diff_url": "https://github.com/huggingface/datasets/pull/2989.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2989.patch", "merged_at": "2021-10-01T17:00...
true
1,011,148,017
2,988
IndexError: Invalid key: 14 is out of bounds for size 0
closed
[ "Hi ! Could you check the length of the `self.dataset` object (i.e. the Dataset object passed to the data loader) ? It looks like the dataset is empty.\r\nNot sure why the SWA optimizer would cause this though.", "Any updates on this? \r\nThe same error occurred to me too when running `cardiffnlp/twitter-roberta-...
2021-09-29T16:04:24
2022-04-10T14:49:49
2022-04-10T14:49:49
## Describe the bug A clear and concise description of what the bug is. Hi. I am trying to implement stochastic weighted averaging optimizer with transformer library as described here https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/ , for this I am using a run_clm.py codes which is wor...
dorost1234
https://github.com/huggingface/datasets/issues/2988
null
false
1,011,026,141
2,987
ArrowInvalid: Can only convert 1-dimensional array values
closed
[ "Hi @NielsRogge, thanks for reporting!\r\n\r\nIn `datasets`, we were handling N-dimensional arrays only when passed as an instance of `np.array`, not when passed as a list of `np.array`s.\r\n\r\nI'm fixing it." ]
2021-09-29T14:18:52
2021-10-01T13:57:45
2021-10-01T13:57:45
## Describe the bug For the ViT and LayoutLMv2 demo notebooks in my [Transformers-Tutorials repo](https://github.com/NielsRogge/Transformers-Tutorials), people reported an ArrowInvalid issue after applying the following function to a Dataset: ``` def preprocess_data(examples): images = [Image.open(path).conve...
NielsRogge
https://github.com/huggingface/datasets/issues/2987
null
false
1,010,792,783
2,986
Refac module factory + avoid etag requests for hub datasets
closed
[ "> One thing is that I still don't know at what extent we want to keep backward compatibility for prepare_module. For now I just kept it (except I removed two parameters) just in case, but it's not used anywhere anymore.\r\n\r\nFYI, various other projects currently use it, thus clearly a major version would be requ...
2021-09-29T10:42:00
2021-10-11T11:05:53
2021-10-11T11:05:52
## Refactor the module factory When trying to extend the `data_files` logic to avoid doing unnecessary ETag requests, I noticed that the module preparation mechanism needed a refactor: - the function was 600 lines long - it was not readable - it contained many different cases that made it complex to maintain - i...
lhoestq
https://github.com/huggingface/datasets/pull/2986
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2986", "html_url": "https://github.com/huggingface/datasets/pull/2986", "diff_url": "https://github.com/huggingface/datasets/pull/2986.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2986.patch", "merged_at": "2021-10-11T11:05...
true
1,010,500,433
2,985
add new dataset kan_hope
closed
[]
2021-09-29T05:20:28
2021-10-01T16:55:19
2021-10-01T16:55:19
## Adding a Dataset - **Name:** *KanHope* - **Description:** *A code-mixed English-Kannada dataset for Hope speech detection* - **Task:** *Binary Text Classification* - **Paper:** *https://arxiv.org/abs/2108.04616* - **Data:** *https://github.com/adeepH/kan_hope/tree/main/dataset* - **Motivation:** *The dataset ...
adeepH
https://github.com/huggingface/datasets/pull/2985
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2985", "html_url": "https://github.com/huggingface/datasets/pull/2985", "diff_url": "https://github.com/huggingface/datasets/pull/2985.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2985.patch", "merged_at": "2021-10-01T16:55...
true
1,010,484,326
2,984
Exceeded maximum rows when reading large files
closed
[ "Hi @zijwang, thanks for reporting this issue.\r\n\r\nYou did not mention which `datasets` version you are using, but looking at the code in the stack trace, it seems you are using an old version.\r\n\r\nCould you please update `datasets` (`pip install -U datasets`) and check if the problem persists?" ]
2021-09-29T04:49:22
2021-10-12T06:05:42
2021-10-12T06:05:42
## Describe the bug A clear and concise description of what the bug is. When using `load_dataset` with json files, if the files are too large, there will be "Exceeded maximum rows" error. ## Steps to reproduce the bug ```python dataset = load_dataset('json', data_files=data_files) # data files have 3M rows in a ...
zijwang
https://github.com/huggingface/datasets/issues/2984
null
false
1,010,263,058
2,983
added SwissJudgmentPrediction dataset
closed
[]
2021-09-28T22:17:56
2021-10-01T16:03:05
2021-10-01T16:03:05
null
JoelNiklaus
https://github.com/huggingface/datasets/pull/2983
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2983", "html_url": "https://github.com/huggingface/datasets/pull/2983", "diff_url": "https://github.com/huggingface/datasets/pull/2983.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2983.patch", "merged_at": "2021-10-01T16:03...
true
1,010,118,418
2,982
Add the Math Aptitude Test of Heuristics dataset.
closed
[]
2021-09-28T19:18:37
2021-10-01T19:51:23
2021-10-01T12:21:00
null
hacobe
https://github.com/huggingface/datasets/pull/2982
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2982", "html_url": "https://github.com/huggingface/datasets/pull/2982", "diff_url": "https://github.com/huggingface/datasets/pull/2982.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2982.patch", "merged_at": "2021-10-01T12:21...
true
1,009,969,310
2,981
add wit dataset
closed
[ "Opening this up - Here's whats left to make it really shine:\r\n- [ ] Update dataset card (this is being worked on by the author)\r\n- [ ] Add `dataset_info.json`. Requires downloading the entire dataset. I believe @lhoestq mentioned he may have a machine he's using for this sort of thing.\r\n\r\nI think both of t...
2021-09-28T16:34:49
2022-05-05T14:26:41
2022-05-05T14:26:41
Resolves #2902 based on conversation there - would also close #2810. Open to suggestions/help 😀 CC @hassiahk @lhoestq @yjernite
nateraw
https://github.com/huggingface/datasets/pull/2981
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2981", "html_url": "https://github.com/huggingface/datasets/pull/2981", "diff_url": "https://github.com/huggingface/datasets/pull/2981.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2981.patch", "merged_at": null }
true
1,009,873,482
2,980
OpenSLR 25: ASR data for Amharic, Swahili and Wolof
open
[ "Whoever handles this just needs to: \r\n\r\n- [ ] fork the HuggingFace Datasets repo\r\n- [ ] update the [existing dataset script](https://github.com/huggingface/datasets/blob/master/datasets/openslr/openslr.py) to add SLR25. Lots of copypasting from other sections of the script should make that easy. \r\nAmharic ...
2021-09-28T15:04:36
2021-09-29T17:25:14
null
## Adding a Dataset - **Name:** *SLR25* - **Description:** *Subset 25 from OpenSLR. Other subsets have been added to https://huggingface.co/datasets/openslr, 25 covers Amharic, Swahili and Wolof data* - **Paper:** *https://www.openslr.org/25/ has citations for each of the three subsubsets. * - **Data:** *Currently ...
cdleong
https://github.com/huggingface/datasets/issues/2980
null
false
1,009,634,147
2,979
ValueError when computing f1 metric with average None
closed
[ "Hi @asofiaoliveira, thanks for reporting.\r\n\r\nI'm fixing it." ]
2021-09-28T11:34:53
2021-10-01T14:17:38
2021-10-01T14:17:38
## Describe the bug When I try to compute the f1 score for each class in a multiclass classification problem, I get a ValueError. The same happens with recall and precision. I traced the error to the `.item()` in these scripts, which is probably there for the other averages. E.g. from f1.py: ```python return { ...
asofiaoliveira
https://github.com/huggingface/datasets/issues/2979
null
false
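The `.item()` failure reported in #2979 above can be reproduced with a plain NumPy array; this is an illustrative sketch of the failure mode and a shape-aware workaround, not the actual metric script:

```python
import numpy as np

# With average=None, sklearn-style metrics return one score per class
# as an ndarray rather than a scalar.
score = np.array([0.5, 0.75, 1.0])

# Calling .item() on a multi-element array raises
# "ValueError: can only convert an array of size 1 to a Python scalar",
# which matches the traceback in the issue.
try:
    score.item()
    raised = False
except ValueError:
    raised = True

# A shape-aware conversion handles both the scalar case (other averages)
# and the per-class case (average=None):
result = score.tolist() if score.ndim else score.item()
```

The zero-dimensional branch (`score.ndim` falsy) still returns a plain Python scalar, so callers that expect a float for `average="macro"` etc. are unaffected.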
1,009,521,419
2,978
Run CI tests against non-production server
open
[ "Hey @albertvillanova could you provide more context, including extracts from the discussion we had ?\r\n\r\nLet's ping @Pierrci @julien-c and @n1t0 for their opinion about that", "@julien-c increased the huggingface.co production workers in order to see if it solve [the 502 you had this morning](https://app.circ...
2021-09-28T09:41:26
2021-09-28T15:23:50
null
Currently, the CI test suite performs requests to the HF production server. As discussed with @elishowk, we should refactor our tests to use the HF staging server instead, like `huggingface_hub` and `transformers`.
albertvillanova
https://github.com/huggingface/datasets/issues/2978
null
false
1,009,378,692
2,977
Impossible to load compressed csv
closed
[ "Hi @Valahaar, thanks for reporting and for your investigation about the source cause.\r\n\r\nYou are right and that commit prevents `pandas` from inferring the compression. On the other hand, @lhoestq did that change to support loading that dataset in streaming mode. \r\n\r\nI'm fixing it." ]
2021-09-28T07:18:54
2021-10-01T15:53:16
2021-10-01T15:53:15
## Describe the bug It is not possible to load from a compressed csv anymore. ## Steps to reproduce the bug ```python load_dataset('csv', data_files=['/path/to/csv.bz2']) ``` ## Problem and possible solution This used to work, but the commit that broke it is [this one](https://github.com/huggingface/datasets...
Valahaar
https://github.com/huggingface/datasets/issues/2977
null
false
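The regression in #2977 above comes down to how compression is inferred. A minimal sketch of the underlying pandas behavior (file names and data here are made up for illustration):

```python
import bz2
import io

import pandas as pd

# When pandas is given a filesystem path, it infers compression from the
# extension (e.g. ".bz2"). When given a file object, as the streaming
# code path does, the compression has to be passed explicitly --
# otherwise the raw compressed bytes are parsed as CSV and fail.
raw = b"col_a,col_b\n1,2\n3,4\n"
buf = io.BytesIO(bz2.compress(raw))
df = pd.read_csv(buf, compression="bz2")
```

Passing `compression="infer"` (the default) only works for path-like inputs, which is consistent with the commit linked in the issue breaking file-object loading.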
1,008,647,889
2,976
Can't load dataset
closed
[ "Hi @mskovalova, \r\n\r\nSome datasets have multiple configurations. Therefore, in order to load them, you have to specify both the *dataset name* and the *configuration name*.\r\n\r\nIn the error message you got, you have a usage example:\r\n- To load the 'wikitext-103-raw-v1' configuration of the 'wikitext' datas...
2021-09-27T21:38:14
2024-04-08T03:27:29
2021-09-28T06:53:01
I'm trying to load a wikitext dataset ``` from datasets import load_dataset raw_datasets = load_dataset("wikitext") ``` ValueError: Config name is missing. Please pick one among the available configs: ['wikitext-103-raw-v1', 'wikitext-2-raw-v1', 'wikitext-103-v1', 'wikitext-2-v1'] Example of usage: `load_d...
mskovalova
https://github.com/huggingface/datasets/issues/2976
null
false
1,008,444,654
2,975
ignore dummy folder and dataset_infos.json
closed
[]
2021-09-27T18:09:03
2021-09-29T09:45:38
2021-09-29T09:05:38
Fixes #2877 Added the `dataset_infos.json` to the ignored files list and also added check to ignore files which have parent directory as `dummy`. Let me know if it is correct. Thanks :)
Ishan-Kumar2
https://github.com/huggingface/datasets/pull/2975
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2975", "html_url": "https://github.com/huggingface/datasets/pull/2975", "diff_url": "https://github.com/huggingface/datasets/pull/2975.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2975.patch", "merged_at": "2021-09-29T09:05...
true
1,008,247,787
2,974
Actually disable dummy labels by default
closed
[]
2021-09-27T14:50:20
2021-09-29T09:04:42
2021-09-29T09:04:41
So I might have just changed the docstring instead of the actual default argument value and not realized. @lhoestq I'm sorry >.>
Rocketknight1
https://github.com/huggingface/datasets/pull/2974
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2974", "html_url": "https://github.com/huggingface/datasets/pull/2974", "diff_url": "https://github.com/huggingface/datasets/pull/2974.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2974.patch", "merged_at": "2021-09-29T09:04...
true
1,007,894,592
2,973
Fix JSON metadata of masakhaner dataset
closed
[]
2021-09-27T09:09:08
2021-09-27T12:59:59
2021-09-27T12:59:59
Fix #2971.
albertvillanova
https://github.com/huggingface/datasets/pull/2973
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2973", "html_url": "https://github.com/huggingface/datasets/pull/2973", "diff_url": "https://github.com/huggingface/datasets/pull/2973.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2973.patch", "merged_at": "2021-09-27T12:59...
true
1,007,808,714
2,972
OSError: Not enough disk space.
closed
[ "Maybe we can change the disk space calculating API from `shutil.disk_usage` to `os.statvfs` in UNIX-like system, which can provide correct results.\r\n```\r\nstatvfs = os.statvfs('path')\r\navail_space_bytes = statvfs.f_frsize * statvfs.f_bavail\r\n```", "Hi @qqaatw, thanks for reporting.\r\n\r\nCould you pleas...
2021-09-27T07:41:22
2024-12-04T02:56:19
2021-09-28T06:43:15
## Describe the bug I'm trying to download `natural_questions` dataset from the Internet, and I've specified the cache_dir which locates in a mounted disk and has enough disk space. However, even though the space is enough, the disk space checking function still reports the space of root `/` disk having no enough spac...
qqaatw
https://github.com/huggingface/datasets/issues/2972
null
false
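A minimal sketch of the `os.statvfs` approach suggested in the first comment of #2972 above (POSIX-only; the helper name is illustrative, not part of the library):

```python
import os
import shutil

def available_bytes(path):
    """Free space, in bytes, on the filesystem that contains `path`."""
    # os.statvfs queries the filesystem that actually holds `path`, so a
    # cache directory on a separate mount is measured correctly rather
    # than falling back to the root filesystem.
    statvfs = os.statvfs(path)
    return statvfs.f_frsize * statvfs.f_bavail

posix_free = available_bytes(".")

# shutil.disk_usage(path).free is the cross-platform equivalent; on
# POSIX it is computed from the same statvfs fields.
portable_free = shutil.disk_usage(".").free
```

Note that `shutil.disk_usage` already resolves the mount point of the path it is given, so the bug described in the issue is about *which path* was checked, not the API itself.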
1,007,696,522
2,971
masakhaner dataset load problem
closed
[ "Thanks for reporting, @ontocord. We are fixing the wrong metadata." ]
2021-09-27T04:59:07
2021-09-27T12:59:59
2021-09-27T12:59:59
## Describe the bug Masakhaner dataset is not loading ## Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset("masakhaner",'amh') ``` ## Expected results Expected the return of a dataset ## Actual results ``` NonMatchingSplitsSizesError Traceback (mo...
huu4ontocord
https://github.com/huggingface/datasets/issues/2971
null
false
1,007,340,089
2,970
Magnet’s
closed
[]
2021-09-26T09:50:29
2021-09-26T10:38:59
2021-09-26T10:38:59
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
rcacho172
https://github.com/huggingface/datasets/issues/2970
null
false