Dataset schema (βŒ€ marks columns that contain null values):

column           dtype          range / values
id               int64          599M – 3.48B
number           int64          1 – 7.8k
title            string         lengths 1 – 290
state            string         2 classes
comments         list           lengths 0 – 30
created_at       timestamp[s]   2020-04-14 10:18:02 – 2025-10-05 06:37:50
updated_at       timestamp[s]   2020-04-27 16:04:17 – 2025-10-05 10:32:43
closed_at        timestamp[s]   2020-04-14 12:01:40 – 2025-10-01 13:56:03  βŒ€
body             string         lengths 0 – 228k  βŒ€
user             string         lengths 3 – 26
html_url         string         lengths 46 – 51
pull_request     dict
is_pull_request  bool           2 classes
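A minimal sketch (using hypothetical sample rows, not the real dataset) of how records with the schema above might be queried once loaded: the `state` and `is_pull_request` columns together distinguish open issues from pull requests.

```python
# Hypothetical sample rows mirroring three columns of the schema above;
# the real records carry all 13 columns.
rows = [
    {"number": 3274, "state": "closed", "is_pull_request": True},
    {"number": 3272, "state": "open", "is_pull_request": False},
    {"number": 3258, "state": "open", "is_pull_request": False},
]

# Select open issues that are not pull requests.
open_issues = [r["number"] for r in rows if r["state"] == "open" and not r["is_pull_request"]]
print(open_issues)  # [3272, 3258]
```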
#3274 · Fix some contact information formats
state: closed | is_pull_request: true | user: lhoestq | id: 1,053,689,140
url: https://github.com/huggingface/datasets/pull/3274
created: 2021-11-15T13:50:34 | updated: 2021-11-15T14:43:55 | closed: 2021-11-15T14:43:54
comments: [ "The CI fail are caused by some missing sections or tags, which is unrelated to this PR. Merging !" ]
body: As reported in https://github.com/huggingface/datasets/issues/3188 some contact information are not displayed correctly. This PR fixes this for CoNLL-2002 and some other datasets with the same issue
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3274", "html_url": "https://github.com/huggingface/datasets/pull/3274", "diff_url": "https://github.com/huggingface/datasets/pull/3274.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3274.patch", "merged_at": "2021-11-15T14:43...

#3273 · Respect row ordering when concatenating datasets along axis=1
state: closed | is_pull_request: false | user: mariosasko | id: 1,053,554,038
url: https://github.com/huggingface/datasets/issues/3273
created: 2021-11-15T11:27:14 | updated: 2021-11-17T15:41:11 | closed: 2021-11-17T15:41:11
comments: []
body: Currently, there is a bug when concatenating datasets along `axis=1` if more than one dataset has the `_indices` attribute defined. In that scenario, all indices mappings except the first one get ignored. A minimal reproducible example: ```python >>> from datasets import Dataset, concatenate_datasets >>> a = Data...
pull_request: null

#3272 · Make iter_archive work with ZIP files
state: open | is_pull_request: false | user: lhoestq | id: 1,053,516,479
url: https://github.com/huggingface/datasets/issues/3272
created: 2021-11-15T10:50:42 | updated: 2021-11-25T00:08:47 | closed: null
comments: [ "Hello, is this issue open for any contributor ? can I work on it ?\r\n\r\n", "Hi ! Sure this is open for any contributor. If you're interested feel free to self-assign this issue to you by commenting `#self-assign`. Then if you have any question or if I can help, feel free to ping me.\r\n\r\nTo begin with, feel ...
body: Currently users can use `dl_manager.iter_archive` in their dataset script to iterate over all the files of a TAR archive. It would be nice if it could work with ZIP files too !
pull_request: null

#3271 · Decode audio from remote
state: closed | is_pull_request: true | user: lhoestq | id: 1,053,482,919
url: https://github.com/huggingface/datasets/pull/3271
created: 2021-11-15T10:25:56 | updated: 2021-11-16T11:35:58 | closed: 2021-11-16T11:35:58
comments: []
body: Currently the Audio feature type can only decode local audio files, not remote files. To fix this I replaced `open` with our `xopen` functoin that is compatible with remote files in audio.py cc @albertvillanova @mariosasko
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3271", "html_url": "https://github.com/huggingface/datasets/pull/3271", "diff_url": "https://github.com/huggingface/datasets/pull/3271.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3271.patch", "merged_at": "2021-11-16T11:35...

#3270 · Add os.listdir for streaming
state: closed | is_pull_request: true | user: lhoestq | id: 1,053,465,662
url: https://github.com/huggingface/datasets/pull/3270
created: 2021-11-15T10:14:04 | updated: 2021-11-15T10:27:03 | closed: 2021-11-15T10:27:03
comments: []
body: Extend `os.listdir` to support streaming data from remote files. This is often used to navigate in remote ZIP files for example
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3270", "html_url": "https://github.com/huggingface/datasets/pull/3270", "diff_url": "https://github.com/huggingface/datasets/pull/3270.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3270.patch", "merged_at": "2021-11-15T10:27...

#3269 · coqa NonMatchingChecksumError
state: closed | is_pull_request: false | user: ZhaofengWu | id: 1,053,218,769
url: https://github.com/huggingface/datasets/issues/3269
created: 2021-11-15T05:04:07 | updated: 2022-01-19T13:58:19 | closed: 2022-01-19T13:58:19
comments: [ "Hi @ZhaofengWu, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your bug:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"coqa\")\r\nDownloading: 3.82kB [00:00, 1.91MB/s]\r\nDownloading: 1.79kB [00:00, 1.79MB/s]\r\nUsing custom data configuration d...
body: ``` >>> from datasets import load_dataset >>> dataset = load_dataset("coqa") Downloading: 3.82kB [00:00, 1.26MB/s] ...
pull_request: null

#3268 · Dataset viewer issue for 'liweili/c4_200m'
state: closed | is_pull_request: false | user: liliwei25 | id: 1,052,992,681
url: https://github.com/huggingface/datasets/issues/3268
created: 2021-11-14T17:18:46 | updated: 2021-12-21T10:25:20 | closed: 2021-12-21T10:24:51
comments: [ "Hi ! I think the issue comes from this [line](https://huggingface.co/datasets/liweili/c4_200m/blob/main/c4_200m.py#L87):\r\n```python\r\npath = filepath + \"/*.tsv*\"\r\n```\r\n\r\nYou can fix this by doing this instead:\r\n```python\r\npath = os.path.join(filepath, \"/*.tsv*\")\r\n```\r\n\r\nHere is why:\r\n\r\nL...
body: ## Dataset viewer issue for '*liweili/c4_200m*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/liweili/c4_200m)* *Server Error* ``` Status code: 404 Exception: Status404Error Message: Not found. Maybe the cache is missing, or maybe the ressource does not exist. ``` ...
pull_request: null
#3267 · Replacing .format() and % by f-strings
state: closed | is_pull_request: true | user: Mehdi2402 | id: 1,052,750,084
url: https://github.com/huggingface/datasets/pull/3267
created: 2021-11-13T19:12:02 | updated: 2021-11-16T21:00:26 | closed: 2021-11-16T14:55:43
comments: [ "Hi ! It looks like most of your changes are just `black` changes. All those changes are not necessary. In particular if you want to use `black`, please use the `make style` command instead. It runs `black` with additional parameters and you shouldn't end up with that many changes\r\n\r\nFeel free to open a new PR ...
body: **Fix #3257** Replaced _.format()_ and _%_ by f-strings in the following modules : - [x] **tests** - [x] **metrics** - [x] **benchmarks** - [x] **utils** - [x] **templates** Will follow in the next PR the modules left : - [ ] **src** Module **datasets** will not be edited as asked by @mariosasko PS...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3267", "html_url": "https://github.com/huggingface/datasets/pull/3267", "diff_url": "https://github.com/huggingface/datasets/pull/3267.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3267.patch", "merged_at": null }

#3266 · Fix URLs for WikiAuto Manual, jeopardy and definite_pronoun_resolution
state: closed | is_pull_request: true | user: LashaO | id: 1,052,700,155
url: https://github.com/huggingface/datasets/pull/3266
created: 2021-11-13T15:01:34 | updated: 2021-12-06T11:16:31 | closed: 2021-12-06T11:16:31
comments: [ "There seems to be problems with datasets metadata, of which I dont have access to. I think one of the datasets is from reddit. Can anyone help?", "Hello @LashaO , I think the errors were caused by `_DATA_FILES` in `definite_pronoun_resolution.py`. Here are details of the test error.\r\n```\r\nself = BuilderConfi...
body: [#3264](https://github.com/huggingface/datasets/issues/3264)
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3266", "html_url": "https://github.com/huggingface/datasets/pull/3266", "diff_url": "https://github.com/huggingface/datasets/pull/3266.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3266.patch", "merged_at": "2021-12-06T11:16...

#3265 · Checksum error for kilt_task_wow
state: closed | is_pull_request: false | user: slyviacassell | id: 1,052,666,558
url: https://github.com/huggingface/datasets/issues/3265
created: 2021-11-13T12:04:17 | updated: 2021-11-16T11:23:53 | closed: 2021-11-16T11:21:58
comments: [ "Using `dataset = load_dataset(\"kilt_tasks\", \"wow\", ignore_verifications=True)` may fix it, but I do not think it is a elegant solution.", "Hi @slyviacassell, thanks for reporting.\r\n\r\nYes, there is an issue with the checksum verification. I'm fixing it.\r\n\r\nAnd as you pointed out, in the meantime, you ...
body: ## Describe the bug Checksum failed when downloads kilt_tasks_wow. See error output for details. ## Steps to reproduce the bug ```python import datasets datasets.load_datasets('kilt_tasks','wow') ``` ## Expected results Download successful ## Actual results ``` Downloading and preparing dataset kilt_ta...
pull_request: null

#3264 · Downloading URL change for WikiAuto Manual, jeopardy and definite_pronoun_resolution
state: closed | is_pull_request: false | user: slyviacassell | id: 1,052,663,513
url: https://github.com/huggingface/datasets/issues/3264
created: 2021-11-13T11:47:12 | updated: 2022-06-01T17:38:16 | closed: 2022-06-01T17:38:16
comments: [ "#take\r\nI am willing to fix this. Links can be replaced for WikiAuto Manual and jeopardy with new ones provided by authors.\r\n\r\nAs for the definite_pronoun_resolution URL, a certificate error seems to be preventing a download. I have the files on my local machine. I can include them in the dataset folder as th...
body: ## Describe the bug - WikiAuto Manual The original manual datasets with the following downloading URL in this [repository](https://github.com/chaojiang06/wiki-auto) was [deleted](https://github.com/chaojiang06/wiki-auto/commit/0af9b066f2b4e02726fb8a9be49283c0ad25367f) by the author. ``` https://github.com/chaoj...
pull_request: null

#3263 · FET DATA
state: closed | is_pull_request: false | user: FStell01 | id: 1,052,552,516
url: https://github.com/huggingface/datasets/issues/3263
created: 2021-11-13T05:46:06 | updated: 2021-11-13T13:31:47 | closed: 2021-11-13T13:31:47
comments: []
body: ## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
pull_request: null

#3262 · asserts replaced with exception for image classification task, csv, json
state: closed | is_pull_request: true | user: manisnesan | id: 1,052,455,082
url: https://github.com/huggingface/datasets/pull/3262
created: 2021-11-12T22:34:59 | updated: 2021-11-15T11:08:37 | closed: 2021-11-15T11:08:37
comments: []
body: Fixes for csv, json in io module and image_classification task with tests referenced in https://github.com/huggingface/datasets/issues/3171
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3262", "html_url": "https://github.com/huggingface/datasets/pull/3262", "diff_url": "https://github.com/huggingface/datasets/pull/3262.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3262.patch", "merged_at": "2021-11-15T11:08...

#3261 · Scifi_TV_Shows: Having trouble getting viewer to find appropriate files
state: closed | is_pull_request: false | user: lara-martin | id: 1,052,346,381
url: https://github.com/huggingface/datasets/issues/3261
created: 2021-11-12T19:25:19 | updated: 2021-12-21T10:24:10 | closed: 2021-12-21T10:24:10
comments: [ "Hi ! I think this is because `iter_archive` doesn't support ZIP files yet. See https://github.com/huggingface/datasets/issues/3272\r\n\r\nYou can navigate into the archive this way instead:\r\n```python\r\n# in split_generators\r\ndata_dir = dl_manager.download_and_extract(url)\r\ntrain_filepath = os.path.join(dat...
body: ## Dataset viewer issue for '*Science Fiction TV Show Plots Corpus (Scifi_TV_Shows)*' **Link:** [link](https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows) I tried adding both a script (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/blob/main/Scifi_TV_Shows.py) and some dummy examples (https:/...
pull_request: null
#3260 · Fix ConnectionError in Scielo dataset
state: closed | is_pull_request: true | user: mariosasko | id: 1,052,247,373
url: https://github.com/huggingface/datasets/pull/3260
created: 2021-11-12T18:02:37 | updated: 2021-11-16T18:18:17 | closed: 2021-11-16T17:55:22
comments: [ "The CI error is unrelated to the change." ]
body: This PR: * allows 403 status code in HEAD requests to S3 buckets to fix the connection error in the Scielo dataset (instead of `url`, uses `response.url` to check the URL of the final endpoint) * makes the Scielo dataset streamable Fixes #3255.
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3260", "html_url": "https://github.com/huggingface/datasets/pull/3260", "diff_url": "https://github.com/huggingface/datasets/pull/3260.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3260.patch", "merged_at": "2021-11-16T17:55...

#3259 · Updating details of IRC disentanglement data
state: closed | is_pull_request: true | user: jkkummerfeld | id: 1,052,189,775
url: https://github.com/huggingface/datasets/pull/3259
created: 2021-11-12T17:16:58 | updated: 2021-11-18T17:19:33 | closed: 2021-11-18T17:19:33
comments: [ "Thank you for the cleanup!" ]
body: I was pleasantly surprised to find that someone had already added my dataset to the huggingface library, but some details were missing or incorrect. This PR fixes the documentation.
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3259", "html_url": "https://github.com/huggingface/datasets/pull/3259", "diff_url": "https://github.com/huggingface/datasets/pull/3259.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3259.patch", "merged_at": "2021-11-18T17:19...

#3258 · Reload dataset that was already downloaded with `load_from_disk` from cloud storage
state: open | is_pull_request: false | user: lhoestq | id: 1,052,188,195
url: https://github.com/huggingface/datasets/issues/3258
created: 2021-11-12T17:14:59 | updated: 2021-11-12T17:14:59 | closed: null
comments: []
body: `load_from_disk` downloads the dataset to a temporary directory without checking if the dataset has already been downloaded once. It would be nice to have some sort of caching for datasets downloaded this way. This could leverage the fingerprint of the dataset that was saved in the `state.json` file.
pull_request: null

#3257 · Use f-strings for string formatting
state: closed | is_pull_request: false | user: mariosasko | id: 1,052,118,365
url: https://github.com/huggingface/datasets/issues/3257
created: 2021-11-12T16:02:15 | updated: 2021-11-17T16:18:38 | closed: 2021-11-17T16:18:38
comments: [ "Hi, I would be glad to help with this. Is there anyone else working on it?", "Hi, I would be glad to work on this too.", "#self-assign", "Hi @Carlosbogo,\r\n\r\nwould you be interested in replacing the `.format` and `%` syntax with f-strings in the modules in the `datasets` directory since @Mehdi2402 has ope...
body: f-strings offer better readability/performance than `str.format` and `%`, so we should use them in all places in our codebase unless there is good reason to keep the older syntax. > **NOTE FOR CONTRIBUTORS**: To avoid large PRs and possible merge conflicts, do 1-3 modules per PR. Also, feel free to ignore the files ...
pull_request: null

#3256 · asserts replaced by exception for text classification task with test.
state: closed | is_pull_request: true | user: manisnesan | id: 1,052,000,613
url: https://github.com/huggingface/datasets/pull/3256
created: 2021-11-12T14:05:36 | updated: 2021-11-12T15:09:33 | closed: 2021-11-12T14:59:32
comments: [ "Haha it looks like you got the chance of being reviewed twice at the same time and got the same suggestion twice x)\r\nAnyway it's all good now so we can merge !", "Thanks for the feedback. " ]
body: I have replaced only a single assert in text_classification.py along with a unit test to verify an exception is raised based on https://github.com/huggingface/datasets/issues/3171 . I would like to first understand the code contribution workflow. So keeping the change to a single file rather than making too many ch...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3256", "html_url": "https://github.com/huggingface/datasets/pull/3256", "diff_url": "https://github.com/huggingface/datasets/pull/3256.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3256.patch", "merged_at": "2021-11-12T14:59...

#3255 · SciELO dataset ConnectionError
state: closed | is_pull_request: false | user: WojciechKusa | id: 1,051,783,129
url: https://github.com/huggingface/datasets/issues/3255
created: 2021-11-12T09:57:14 | updated: 2021-11-16T17:55:22 | closed: 2021-11-16T17:55:22
comments: []
body: ## Describe the bug I get `ConnectionError` when I am trying to load the SciELO dataset. When I try the URL with `requests` I get: ``` >>> requests.head("https://ndownloader.figstatic.com/files/14019287") <Response [302]> ``` And as far as I understand redirections in `datasets` are not supported for downlo...
pull_request: null

#3254 · Update xcopa dataset (fix checksum issues + add translated data)
state: closed | is_pull_request: true | user: mariosasko | id: 1,051,351,172
url: https://github.com/huggingface/datasets/pull/3254
created: 2021-11-11T20:51:33 | updated: 2021-11-12T10:30:58 | closed: 2021-11-12T10:30:57
comments: [ "The CI failures are unrelated to the changes (missing fields in the readme and the CER metric error fixed in #3252)." ]
body: This PR updates the checksums (as reported [here](https://discuss.huggingface.co/t/how-to-load-dataset-locally/11601/2)) of the `xcopa` dataset. Additionally, it adds new configs that hold the translated data of the original set of configs. This data was not available at the time of adding this dataset to the lib.
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3254", "html_url": "https://github.com/huggingface/datasets/pull/3254", "diff_url": "https://github.com/huggingface/datasets/pull/3254.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3254.patch", "merged_at": "2021-11-12T10:30...
#3253 · `GeneratorBasedBuilder` does not support `None` values
state: closed | is_pull_request: false | user: pavel-lexyr | id: 1,051,308,972
url: https://github.com/huggingface/datasets/issues/3253
created: 2021-11-11T19:51:21 | updated: 2021-12-09T14:26:58 | closed: 2021-12-09T14:26:58
comments: [ "Hi,\r\n\r\nthanks for reporting and providing a minimal reproducible example. \r\n\r\nThis line of the PR I've linked in our discussion on the Forum will add support for `None` values:\r\nhttps://github.com/huggingface/datasets/blob/a53de01842aac65c66a49b2439e18fa93ff73ceb/src/datasets/features/features.py#L835\r\...
body: ## Describe the bug `GeneratorBasedBuilder` does not support `None` values. ## Steps to reproduce the bug See [this repository](https://github.com/pavel-lexyr/huggingface-datasets-bug-reproduction) for minimal reproduction. ## Expected results Dataset is initialized with a `None` value in the `value` column. ...
pull_request: null

#3252 · Fix failing CER metric test in CI after update
state: closed | is_pull_request: true | user: mariosasko | id: 1,051,124,749
url: https://github.com/huggingface/datasets/pull/3252
created: 2021-11-11T15:57:16 | updated: 2021-11-12T14:06:44 | closed: 2021-11-12T14:06:43
comments: []
body: Fixes the [failing CER metric test](https://app.circleci.com/pipelines/github/huggingface/datasets/8644/workflows/79816553-fa2f-4756-b022-d5937f00bf7b/jobs/53298) in CI by adding support for `jiwer==2.3.0`, which was released yesterday. Also, I verified that all the tests in `metrics/cer/test_cer.py` pass after the cha...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3252", "html_url": "https://github.com/huggingface/datasets/pull/3252", "diff_url": "https://github.com/huggingface/datasets/pull/3252.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3252.patch", "merged_at": "2021-11-12T14:06...

#3250 · Add ETHICS dataset
state: closed | is_pull_request: true | user: ssss1029 | id: 1,050,541,348
url: https://github.com/huggingface/datasets/pull/3250
created: 2021-11-11T03:45:34 | updated: 2022-10-03T09:37:25 | closed: 2022-10-03T09:37:25
comments: [ "Thanks for your contribution, @ssss1029. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if ...
body: This PR adds the ETHICS dataset, including all 5 sub-datasets. From https://arxiv.org/abs/2008.02275
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3250", "html_url": "https://github.com/huggingface/datasets/pull/3250", "diff_url": "https://github.com/huggingface/datasets/pull/3250.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3250.patch", "merged_at": null }

#3249 · Fix streaming for id_newspapers_2018
state: closed | is_pull_request: true | user: lhoestq | id: 1,050,193,138
url: https://github.com/huggingface/datasets/pull/3249
created: 2021-11-10T18:55:30 | updated: 2021-11-12T14:01:32 | closed: 2021-11-12T14:01:31
comments: []
body: To be compatible with streaming, this dataset must use `dl_manager.iter_archive` since the data are in a .tgz file
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3249", "html_url": "https://github.com/huggingface/datasets/pull/3249", "diff_url": "https://github.com/huggingface/datasets/pull/3249.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3249.patch", "merged_at": "2021-11-12T14:01...

#3248 · Stream from Google Drive and other hosts
state: closed | is_pull_request: true | user: lhoestq | id: 1,050,171,082
url: https://github.com/huggingface/datasets/pull/3248
created: 2021-11-10T18:32:32 | updated: 2021-11-30T16:03:43 | closed: 2021-11-12T17:18:11
comments: [ "I just tried some datasets and noticed that `spider` is not working for some reason (the compression type is not recognized), resulting in FileNotFoundError. I can take a look tomorrow", "I'm fixing the remaining files based on TAR archives", "THANKS A LOT" ]
body: Streaming from Google Drive is a bit more challenging than the other host we've been supporting: - the download URL must be updated to add the confirm token obtained by HEAD request - it requires to use cookies to keep the connection alive - the URL doesn't tell any information about whether the file is compressed o...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3248", "html_url": "https://github.com/huggingface/datasets/pull/3248", "diff_url": "https://github.com/huggingface/datasets/pull/3248.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3248.patch", "merged_at": "2021-11-12T17:18...

#3247 · Loading big json dataset raises pyarrow.lib.ArrowNotImplementedError
state: closed | is_pull_request: false | user: maxzirps | id: 1,049,699,088
url: https://github.com/huggingface/datasets/issues/3247
created: 2021-11-10T11:17:59 | updated: 2022-04-10T14:05:57 | closed: 2022-04-10T14:05:57
comments: [ "Hi,\r\n\r\nthis issue is similar to https://github.com/huggingface/datasets/issues/3093, so you can either use the solution provided there or try to load the data in one chunk (you can control the chunk size by specifying the `chunksize` parameter (`int`) in `load_dataset`).\r\n\r\n@lhoestq Is this worth opening a...
body: ## Describe the bug When trying to create a dataset from a json file with around 25MB, the following error is raised `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct` Splitting the big file into smaller ones and then loading it with the `lo...
pull_request: null

#3246 · [tiny] fix typo in stream docs
state: closed | is_pull_request: true | user: verbiiyo | id: 1,049,662,746
url: https://github.com/huggingface/datasets/pull/3246
created: 2021-11-10T10:40:02 | updated: 2021-11-10T11:10:39 | closed: 2021-11-10T11:10:39
comments: []
body: null
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3246", "html_url": "https://github.com/huggingface/datasets/pull/3246", "diff_url": "https://github.com/huggingface/datasets/pull/3246.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3246.patch", "merged_at": "2021-11-10T11:10...
#3245 · Fix load_from_disk temporary directory
state: closed | is_pull_request: true | user: lhoestq | id: 1,048,726,062
url: https://github.com/huggingface/datasets/pull/3245
created: 2021-11-09T15:15:15 | updated: 2021-11-09T15:30:52 | closed: 2021-11-09T15:30:51
comments: []
body: `load_from_disk` uses `tempfile.TemporaryDirectory()` instead of our `get_temporary_cache_files_directory()` function. This can cause the temporary directory to be deleted before the dataset object is garbage collected. In practice, it prevents anyone from using methods like `shuffle` on a dataset loaded this way, b...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3245", "html_url": "https://github.com/huggingface/datasets/pull/3245", "diff_url": "https://github.com/huggingface/datasets/pull/3245.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3245.patch", "merged_at": "2021-11-09T15:30...

#3244 · Fix filter method for batched=True
state: closed | is_pull_request: true | user: thomasw21 | id: 1,048,675,741
url: https://github.com/huggingface/datasets/pull/3244
created: 2021-11-09T14:30:59 | updated: 2021-11-09T15:52:58 | closed: 2021-11-09T15:52:57
comments: []
body: null
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3244", "html_url": "https://github.com/huggingface/datasets/pull/3244", "diff_url": "https://github.com/huggingface/datasets/pull/3244.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3244.patch", "merged_at": "2021-11-09T15:52...

#3243 · Remove redundant isort module placement
state: closed | is_pull_request: true | user: mariosasko | id: 1,048,630,754
url: https://github.com/huggingface/datasets/pull/3243
created: 2021-11-09T13:50:30 | updated: 2021-11-12T14:02:45 | closed: 2021-11-12T14:02:45
comments: []
body: `isort` can place modules by itself from [version 5.0.0](https://pycqa.github.io/isort/docs/upgrade_guides/5.0.0.html#module-placement-changes-known_third_party-known_first_party-default_section-etc) onwards, making the `known_first_party` and `known_third_party` fields in `setup.cfg` redundant (this is why our CI work...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3243", "html_url": "https://github.com/huggingface/datasets/pull/3243", "diff_url": "https://github.com/huggingface/datasets/pull/3243.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3243.patch", "merged_at": "2021-11-12T14:02...

#3242 · Adding ANERcorp-CAMeLLab dataset
state: open | is_pull_request: false | user: vitalyshalumov | id: 1,048,527,232
url: https://github.com/huggingface/datasets/issues/3242
created: 2021-11-09T12:04:04 | updated: 2021-11-09T12:41:15 | closed: null
comments: [ "Adding ANERcorp dataset\r\n\r\n## Adding a Dataset\r\n- **Name:** *ANERcorp-CAMeLLab*\r\n- **Description:** *Since its creation in 2008, the ANERcorp dataset (Benajiba & Rosso, 2008) has been a standard reference used by Arabic named entity recognition researchers around the world. However, over time, this dataset...
body: null
pull_request: null

#3241 · Swap descriptions of v1 and raw-v1 configs of WikiText dataset and fix metadata
state: closed | is_pull_request: true | user: albertvillanova | id: 1,048,461,852
url: https://github.com/huggingface/datasets/pull/3241
created: 2021-11-09T10:54:15 | updated: 2022-02-14T15:46:00 | closed: 2021-11-09T13:49:28
comments: []
body: Fix #3237, fix #795.
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3241", "html_url": "https://github.com/huggingface/datasets/pull/3241", "diff_url": "https://github.com/huggingface/datasets/pull/3241.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3241.patch", "merged_at": "2021-11-09T13:49...

#3240 · Couldn't reach data file for disaster_response_messages
state: closed | is_pull_request: false | user: pandya6988 | id: 1,048,376,021
url: https://github.com/huggingface/datasets/issues/3240
created: 2021-11-09T09:26:42 | updated: 2021-12-14T14:38:29 | closed: 2021-12-14T14:38:29
comments: [ "It looks like the dataset isn't available anymore on appen.com\r\n\r\nThe CSV files appear to still be available at https://www.kaggle.com/landlord/multilingual-disaster-response-messages though. It says that the data are under the CC0 license so I guess we can host the dataset elsewhere instead ?" ]
body: ## Describe the bug Following command gives an ConnectionError. ## Steps to reproduce the bug ```python disaster = load_dataset('disaster_response_messages') ``` ## Error ``` ConnectionError: Couldn't reach https://datasets.appen.com/appen_datasets/disaster_response_data/disaster_response_messages_training....
pull_request: null

#3239 · Inconsistent performance of the "arabic_billion_words" dataset
state: open | is_pull_request: false | user: vitalyshalumov | id: 1,048,360,232
url: https://github.com/huggingface/datasets/issues/3239
created: 2021-11-09T09:11:00 | updated: 2021-11-09T09:11:00 | closed: null
comments: []
body: ## Describe the bug When downloaded from macine 1 the dataset is downloaded and parsed correctly. When downloaded from machine two (which has a different cache directory), the following script: import datasets from datasets import load_dataset raw_dataset_elkhair_1 = load_dataset('arabic_billion_words', 'Alitti...
pull_request: null
#3238 · Reuters21578 Couldn't reach
state: closed | is_pull_request: false | user: TingNLP | id: 1,048,226,086
url: https://github.com/huggingface/datasets/issues/3238
created: 2021-11-09T06:08:56 | updated: 2021-11-11T00:02:57 | closed: 2021-11-11T00:02:57
comments: [ "Hi ! The URL works fine on my side today, could you try again ?", "thank you @lhoestq \r\nit works" ]
body: ``## Adding a Dataset - **Name:** *Reuters21578* - **Description:** *ConnectionError: Couldn't reach https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz* - **Data:** *https://huggingface.co/datasets/reuters21578* `from datasets import load_dataset` `dataset = load_dataset("reuters21578", 'ModLewis...
pull_request: null

#3237 · wikitext description wrong
state: closed | is_pull_request: false | user: hongyuanmei | id: 1,048,165,525
url: https://github.com/huggingface/datasets/issues/3237
created: 2021-11-09T04:06:52 | updated: 2022-02-14T15:45:11 | closed: 2021-11-09T13:49:28
comments: [ "Hi @hongyuanmei, thanks for reporting.\r\n\r\nI'm fixing it.", "Duplicate of:\r\n- #795" ]
body: ## Describe the bug Descriptions of the wikitext datasests are wrong. ## Steps to reproduce the bug Please see: https://github.com/huggingface/datasets/blob/f6dcafce996f39b6a4bbe3a9833287346f4a4b68/datasets/wikitext/wikitext.py#L50 ## Expected results The descriptions for raw-v1 and v1 should be switched.
pull_request: null

#3236 · Loading of datasets changed in #3110 returns no examples
state: closed | is_pull_request: false | user: eladsegal | id: 1,048,026,358
url: https://github.com/huggingface/datasets/issues/3236
created: 2021-11-08T23:29:46 | updated: 2021-11-09T16:46:05 | closed: 2021-11-09T16:45:47
comments: [ "Hi @eladsegal, thanks for reporting.\r\n\r\nI am sorry, but I can't reproduce the bug:\r\n```\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"qasper\")\r\nDownloading: 5.11kB [00:00, ?B/s]\r\nDownloading and preparing dataset qasper/qasper (download: 9.88 MiB, generated: 35.11 MiB, ...
body: ## Describe the bug Loading of datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples: ```python DatasetDict({ train: Dataset({ features: ['id', 'title', 'abstract', 'full_text', 'qas'], num_rows: 0 }) validation: Dataset({ features: ['id',...
pull_request: null

#3235 · Addd options to use updated bleurt checkpoints
state: closed | is_pull_request: true | user: jaehlee | id: 1,047,808,263
url: https://github.com/huggingface/datasets/pull/3235
created: 2021-11-08T18:53:54 | updated: 2021-11-12T14:05:28 | closed: 2021-11-12T14:05:28
comments: []
body: Adds options to use newer recommended checkpoint (as of 2021/10/8) bleurt-20 and its distilled versions. Updated checkpoints are described in https://github.com/google-research/bleurt/blob/master/checkpoints.md#the-recommended-checkpoint-bleurt-20 This change won't affect the default behavior of metrics/bleurt. ...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3235", "html_url": "https://github.com/huggingface/datasets/pull/3235", "diff_url": "https://github.com/huggingface/datasets/pull/3235.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3235.patch", "merged_at": "2021-11-12T14:05...

#3234 · Avoid PyArrow type optimization if it fails
state: closed | is_pull_request: true | user: mariosasko | id: 1,047,634,236
url: https://github.com/huggingface/datasets/pull/3234
created: 2021-11-08T16:10:27 | updated: 2021-11-10T12:04:29 | closed: 2021-11-10T12:04:28
comments: [ "That's good to have a way to disable this easily :)\r\nI just find it a bit unfortunate that users would have to experience the error once and then do `DISABLE_PYARROW_TYPES_OPTIMIZATION=1`. Do you know if there's a way to simply fallback on disabling it automatically when it fails ?", "@lhoestq Actually, I agre...
body: Adds a new variable, `DISABLE_PYARROW_TYPES_OPTIMIZATION`, to `config.py` for easier control of the Arrow type optimization. Fix #2206
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3234", "html_url": "https://github.com/huggingface/datasets/pull/3234", "diff_url": "https://github.com/huggingface/datasets/pull/3234.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3234.patch", "merged_at": "2021-11-10T12:04...

#3233 · Improve repository structure docs
state: closed | is_pull_request: true | user: lhoestq | id: 1,047,474,931
url: https://github.com/huggingface/datasets/pull/3233
created: 2021-11-08T13:51:35 | updated: 2021-11-09T10:02:18 | closed: 2021-11-09T10:02:17
comments: []
body: Continuation of the documentation started in https://github.com/huggingface/datasets/pull/3221, taking into account @stevhliu 's comments
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3233", "html_url": "https://github.com/huggingface/datasets/pull/3233", "diff_url": "https://github.com/huggingface/datasets/pull/3233.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3233.patch", "merged_at": "2021-11-09T10:02...

#3232 · The Xsum datasets seems not able to download.
state: closed | is_pull_request: false | user: FYYFU | id: 1,047,361,573
url: https://github.com/huggingface/datasets/issues/3232
created: 2021-11-08T11:58:54 | updated: 2021-11-09T15:07:16 | closed: 2021-11-09T15:07:16
comments: [ "Hi ! On my side the URL is working fine, could you try again ?", "> Hi ! On my side the URL is working fine, could you try again ?\r\n\r\nI try it again and cannot download the file (might because of my location). Could you please provide another download link(such as google drive)? :>", "I don't know other ...
body: ## Describe the bug The download Link of the Xsum dataset provided in the repository is [Link](http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz). It seems not able to download. ## Steps to reproduce the bug ```python load_dataset('xsum') ``` ## Actual results ``` python r...
pull_request: null
1,047,170,906
3,231
Group tests in multiprocessing workers by test file
closed
[]
2021-11-08T08:46:03
2021-11-08T13:19:18
2021-11-08T08:59:44
By grouping tests by test file, we make sure that all the tests in `test_load.py` are sent to the same worker. Therefore, the fixture `hf_token` will be called only once (and from the same worker). Related to: #3200. Fix #3219.
albertvillanova
https://github.com/huggingface/datasets/pull/3231
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3231", "html_url": "https://github.com/huggingface/datasets/pull/3231", "diff_url": "https://github.com/huggingface/datasets/pull/3231.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3231.patch", "merged_at": "2021-11-08T08:59...
true
1,047,135,583
3,230
Add full tagset to conll2003 README
closed
[ "I also added the missing `pretty_name` tag in the dataset card to fix the CI" ]
2021-11-08T08:06:04
2021-11-09T10:48:38
2021-11-09T10:40:58
Even though it is possible to manually get the tagset list with ```python dset.features[field_name].feature.names ``` I think it is useful to have an overview of the used tagset on the dataset card. This is particularly useful in light of the **dataset viewer**: the tags are encoded, so it is not immediately ob...
BramVanroy
https://github.com/huggingface/datasets/pull/3230
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3230", "html_url": "https://github.com/huggingface/datasets/pull/3230", "diff_url": "https://github.com/huggingface/datasets/pull/3230.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3230.patch", "merged_at": "2021-11-09T10:40...
true
1,046,706,425
3,229
Fix URL in CITATION file
closed
[]
2021-11-07T10:04:35
2021-11-07T10:04:46
2021-11-07T10:04:45
Currently the BibTeX citation parsed from the CITATION file has wrong URL (it shows the repo URL instead of the proceedings paper URL): ``` @inproceedings{Lhoest_Datasets_A_Community_2021, author = {Lhoest, Quentin and Villanova del Moral, Albert and von Platen, Patrick and Wolf, Thomas and Ε aΕ‘ko, Mario and Jernite,...
albertvillanova
https://github.com/huggingface/datasets/pull/3229
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3229", "html_url": "https://github.com/huggingface/datasets/pull/3229", "diff_url": "https://github.com/huggingface/datasets/pull/3229.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3229.patch", "merged_at": "2021-11-07T10:04...
true
1,046,702,143
3,228
Add CITATION file
closed
[]
2021-11-07T09:40:19
2021-11-07T09:51:47
2021-11-07T09:51:46
Add CITATION file.
albertvillanova
https://github.com/huggingface/datasets/pull/3228
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3228", "html_url": "https://github.com/huggingface/datasets/pull/3228", "diff_url": "https://github.com/huggingface/datasets/pull/3228.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3228.patch", "merged_at": "2021-11-07T09:51...
true
1,046,667,845
3,227
Error in `Json(datasets.ArrowBasedBuilder)` class
closed
[ "I have additionally identified the source of the error, being that [this condition](https://github.com/huggingface/datasets/blob/fc46bba66ba4f432cc10501c16a677112e13984c/src/datasets/packaged_modules/json/json.py#L124-L126) in the file\r\n`python3.8/site-packages/datasets/packaged_modules/json/json.py` is not bein...
2021-11-07T05:50:32
2021-11-09T19:09:15
2021-11-09T19:09:15
## Describe the bug When a json file contains a `text` field that is larger than the block_size, the JSON dataset builder fails. ## Steps to reproduce the bug Create a folder that contains the following: ``` . β”œβ”€β”€ testdata β”‚Β Β  └── mydata.json └── test.py ``` Please download [this file](https://github.com/...
JunShern
https://github.com/huggingface/datasets/issues/3227
null
false
1,046,584,518
3,226
Fix paper BibTeX citation with proceedings reference
closed
[]
2021-11-06T19:52:59
2021-11-07T07:05:28
2021-11-07T07:05:27
Fix paper BibTeX citation with proceedings reference.
albertvillanova
https://github.com/huggingface/datasets/pull/3226
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3226", "html_url": "https://github.com/huggingface/datasets/pull/3226", "diff_url": "https://github.com/huggingface/datasets/pull/3226.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3226.patch", "merged_at": "2021-11-07T07:05...
true
1,046,530,493
3,225
Update tatoeba to v2021-07-22
closed
[ "How about this? @lhoestq @abhishekkrthakur ", "Hi ! I think it would be nice if people could still be able to load the old version.\r\nMaybe this can be a parameter ? For example to load the old version they could do\r\n```python\r\nload_dataset(\"tatoeba\", lang1=\"en\", lang2=\"mr\", date=\"v2020-11-09\")\r\n`...
2021-11-06T15:14:31
2021-11-12T11:13:13
2021-11-12T11:13:13
Tatoeba's latest version is v2021-07-22
KoichiYasuoka
https://github.com/huggingface/datasets/pull/3225
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3225", "html_url": "https://github.com/huggingface/datasets/pull/3225", "diff_url": "https://github.com/huggingface/datasets/pull/3225.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3225.patch", "merged_at": "2021-11-12T11:13...
true
1,046,495,831
3,224
User-pickling with dynamic sub-classing
closed
[ "@lhoestq Feel free to have a look. The implementation is slightly different from what you suggested. I have opted to overwrite `save` instead of meddling with `save_global`. `save_global` is called very late down in dill/pickle so it is hard to control for what is happening there. I might be wrong. Pickling is mor...
2021-11-06T12:08:24
2025-03-26T19:45:37
2025-03-26T19:45:36
This is a continuation of the now closed PR in https://github.com/huggingface/datasets/pull/3206. The discussion there has shaped a new approach to do this. In this PR, behavior of `pklregister` and `Pickler` is extended. Earlier, users were already able to register custom pickle functions. That is useful if they ha...
BramVanroy
https://github.com/huggingface/datasets/pull/3224
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3224", "html_url": "https://github.com/huggingface/datasets/pull/3224", "diff_url": "https://github.com/huggingface/datasets/pull/3224.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3224.patch", "merged_at": null }
true
1,046,445,507
3,223
Update BibTeX entry
closed
[]
2021-11-06T06:41:52
2021-11-06T07:06:38
2021-11-06T07:06:38
Update BibTeX entry.
albertvillanova
https://github.com/huggingface/datasets/pull/3223
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3223", "html_url": "https://github.com/huggingface/datasets/pull/3223", "diff_url": "https://github.com/huggingface/datasets/pull/3223.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3223.patch", "merged_at": "2021-11-06T07:06...
true
1,046,299,725
3,222
Add docs for audio processing
closed
[ "Nice ! love it this way. I guess you can set this PR to \"ready for review\" ?", "I guess we can merge this one now :)" ]
2021-11-05T23:07:59
2021-11-24T16:32:08
2021-11-24T15:35:52
This PR adds documentation for the `Audio` feature. It describes: - The difference between loading `path` and `audio`, as well as use-cases/best practices for each of them. - Resampling audio files with `cast_column`, and then calling `ds[0]["audio"]` to automatically decode and resample to the desired sampling rat...
stevhliu
https://github.com/huggingface/datasets/pull/3222
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3222", "html_url": "https://github.com/huggingface/datasets/pull/3222", "diff_url": "https://github.com/huggingface/datasets/pull/3222.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3222.patch", "merged_at": "2021-11-24T15:35...
true
1,045,890,512
3,221
Resolve data_files by split name
closed
[ "Really cool!\r\nWhen splitting by folder, what do we use for validation set (\"valid\", \"validation\" or both)?", "> When splitting by folder, what do we use for validation set (\"valid\", \"validation\" or both)?\r\n\r\nBoth are fine :) As soon as it has \"valid\" in it", "Merging for now, if you have commen...
2021-11-05T14:07:35
2021-11-08T13:52:20
2021-11-05T17:49:58
As discussed in https://github.com/huggingface/datasets/issues/3027 we should automatically infer what file is supposed to go to what split automatically, based on filenames. I added the support for different kinds of patterns, for both dataset repositories and local directories: ``` Input structure: ...
lhoestq
https://github.com/huggingface/datasets/pull/3221
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3221", "html_url": "https://github.com/huggingface/datasets/pull/3221", "diff_url": "https://github.com/huggingface/datasets/pull/3221.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3221.patch", "merged_at": "2021-11-05T17:49...
true
1,045,549,029
3,220
Add documentation about dataset viewer feature
open
[ "In particular, include this somewhere in the docs: https://huggingface.co/docs/hub/datasets-viewer#access-the-parquet-files\r\n\r\nSee https://github.com/huggingface/hub-docs/issues/563" ]
2021-11-05T08:11:19
2023-09-25T11:48:38
null
Add to the docs more details about the dataset viewer feature in the Hub. CC: @julien-c
albertvillanova
https://github.com/huggingface/datasets/issues/3220
null
false
1,045,095,000
3,219
Eventual Invalid Token Error at setup of private datasets
closed
[]
2021-11-04T18:50:45
2021-11-08T13:23:06
2021-11-08T08:59:43
## Describe the bug From time to time, there appear Invalid Token errors with private datasets: - https://app.circleci.com/pipelines/github/huggingface/datasets/8520/workflows/d44629f2-4749-40f8-a657-50931d0b3434/jobs/52534 ``` ____________ ERROR at setup of test_load_streaming_private_dataset _____________ ...
albertvillanova
https://github.com/huggingface/datasets/issues/3219
null
false
1,045,032,313
3,218
Fix code quality in riddle_sense dataset
closed
[]
2021-11-04T17:43:20
2021-11-04T17:50:03
2021-11-04T17:50:02
Fix trailing whitespace. Fix #3217.
albertvillanova
https://github.com/huggingface/datasets/pull/3218
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3218", "html_url": "https://github.com/huggingface/datasets/pull/3218", "diff_url": "https://github.com/huggingface/datasets/pull/3218.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3218.patch", "merged_at": "2021-11-04T17:50...
true
1,045,029,710
3,217
Fix code quality bug in riddle_sense dataset
closed
[ "To give more context: https://github.com/psf/black/issues/318. `black` doesn't treat this as a bug, but `flake8` does. \r\n" ]
2021-11-04T17:40:32
2021-11-04T17:50:02
2021-11-04T17:50:02
## Describe the bug ``` datasets/riddle_sense/riddle_sense.py:36:21: W291 trailing whitespace ```
albertvillanova
https://github.com/huggingface/datasets/issues/3217
null
false
1,045,027,733
3,216
Pin version exclusion for tensorflow incompatible with keras
closed
[]
2021-11-04T17:38:06
2021-11-05T10:57:38
2021-11-05T10:57:37
Once `tensorflow` version 2.6.2 is released: - https://github.com/tensorflow/tensorflow/commit/c1867f3bfdd1042f694df7a9870be51ba80543cb - https://pypi.org/project/tensorflow/2.6.2/ with the patch: - tensorflow/tensorflow#52927 we can remove the temporary fix we introduced in: - #3208 Fix #3209.
albertvillanova
https://github.com/huggingface/datasets/pull/3216
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3216", "html_url": "https://github.com/huggingface/datasets/pull/3216", "diff_url": "https://github.com/huggingface/datasets/pull/3216.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3216.patch", "merged_at": "2021-11-05T10:57...
true
1,045,011,207
3,215
Small updates to to_tf_dataset documentation
closed
[ "@stevhliu Accepted both suggestions, thanks for the review!" ]
2021-11-04T17:22:01
2021-11-04T18:55:38
2021-11-04T18:55:37
I added a little more description about `to_tf_dataset` compared to just setting the format
Rocketknight1
https://github.com/huggingface/datasets/pull/3215
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3215", "html_url": "https://github.com/huggingface/datasets/pull/3215", "diff_url": "https://github.com/huggingface/datasets/pull/3215.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3215.patch", "merged_at": "2021-11-04T18:55...
true
1,044,924,050
3,214
Add ACAV100M Dataset
open
[]
2021-11-04T15:59:58
2021-12-08T12:00:30
null
## Adding a Dataset - **Name:** *ACAV100M* - **Description:** *contains 100 million videos with high audio-visual correspondence, ideal for self-supervised video representation learning.* - **Paper:** *https://arxiv.org/abs/2101.10803* - **Data:** *https://github.com/sangho-vision/acav100m* - **Motivation:** *The ...
nateraw
https://github.com/huggingface/datasets/issues/3214
null
false
1,044,745,313
3,213
Fix tuple_ie download url
closed
[]
2021-11-04T13:09:07
2021-11-05T14:16:06
2021-11-05T14:16:05
Fix #3204
mariosasko
https://github.com/huggingface/datasets/pull/3213
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3213", "html_url": "https://github.com/huggingface/datasets/pull/3213", "diff_url": "https://github.com/huggingface/datasets/pull/3213.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3213.patch", "merged_at": "2021-11-05T14:16...
true
1,044,640,967
3,212
Sort files before loading
closed
[ "This will be fixed by https://github.com/huggingface/datasets/pull/3221" ]
2021-11-04T11:08:31
2021-11-05T17:49:58
2021-11-05T17:49:58
When loading a dataset that consists of several files (e.g. `my_data/data_001.json`, `my_data/data_002.json` etc.) they are not loaded in order when using `load_dataset("my_data")`. This could lead to counter-intuitive results if, for example, the data files are sorted by date or similar since they would appear in d...
lvwerra
https://github.com/huggingface/datasets/issues/3212
null
false
1,044,617,913
3,211
Fix disable_nullable default value to False
closed
[]
2021-11-04T10:52:06
2021-11-04T11:08:21
2021-11-04T11:08:20
Currently the `disable_nullable` parameter is not consistent across all dataset transforms. For example it is `False` in `map` but `True` in `flatten_indices`. This creates unexpected behaviors like this ```python from datasets import Dataset, concatenate_datasets d1 = Dataset.from_dict({"a": [0, 1, 2, 3]}) d2...
lhoestq
https://github.com/huggingface/datasets/pull/3211
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3211", "html_url": "https://github.com/huggingface/datasets/pull/3211", "diff_url": "https://github.com/huggingface/datasets/pull/3211.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3211.patch", "merged_at": "2021-11-04T11:08...
true
1,044,611,471
3,210
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py
closed
[ "Hi ! Do you have some kind of proxy in your browser that gives you access to internet ?\r\n\r\nMaybe you're having this error because you don't have access to this URL from python ?", "Hi,do you fixed this error?\r\nI still have this issue when use \"use_auth_token=True\"", "You don't need authentication to ac...
2021-11-04T10:47:26
2022-03-30T08:26:35
2022-03-30T08:26:35
when I use python examples/pytorch/translation/run_translation.py --model_name_or_path examples/pytorch/translation/opus-mt-en-ro --do_train --do_eval --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir /tmp/tst-translation --per_device_tra...
xiuzhilu
https://github.com/huggingface/datasets/issues/3210
null
false
1,044,505,771
3,209
Unpin keras once TF fixes its release
closed
[]
2021-11-04T09:15:32
2021-11-05T10:57:37
2021-11-05T10:57:37
Related to: - #3208
albertvillanova
https://github.com/huggingface/datasets/issues/3209
null
false
1,044,504,093
3,208
Pin keras version until TF fixes its release
closed
[]
2021-11-04T09:13:32
2021-11-04T09:30:55
2021-11-04T09:30:54
Fix #3207.
albertvillanova
https://github.com/huggingface/datasets/pull/3208
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3208", "html_url": "https://github.com/huggingface/datasets/pull/3208", "diff_url": "https://github.com/huggingface/datasets/pull/3208.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3208.patch", "merged_at": "2021-11-04T09:30...
true
1,044,496,389
3,207
CI error: Another metric with the same name already exists in Keras 2.7.0
closed
[]
2021-11-04T09:04:11
2021-11-04T09:30:54
2021-11-04T09:30:54
## Describe the bug Release of TensorFlow 2.7.0 contains an incompatibility with Keras. See: - keras-team/keras#15579 This breaks our CI test suite: https://app.circleci.com/pipelines/github/huggingface/datasets/8493/workflows/055c7ae2-43bc-49b4-9f11-8fc71f35a25c/jobs/52363
albertvillanova
https://github.com/huggingface/datasets/issues/3207
null
false
1,044,216,270
3,206
[WIP] Allow user-defined hash functions via a registry
closed
[ "Hi @BramVanroy, thanks for your PR.\r\n\r\nThere was a bug in TensorFlow/Keras. We have made a temporary fix in master branch. Please, merge master into your PR branch, so that the CI tests pass.\r\n\r\n```\r\ngit checkout registry\r\ngit fetch upstream master\r\ngit merge upstream/master\r\n```", "@albertvillan...
2021-11-03T23:25:42
2021-11-05T12:38:11
2021-11-05T12:38:04
Inspired by the discussion on hashing in https://github.com/huggingface/datasets/issues/3178#issuecomment-959016329, @lhoestq suggested that it would be neat to allow users more control over the hashing process. Specifically, it would be great if users can specify specific hashing functions depending on the **class** o...
BramVanroy
https://github.com/huggingface/datasets/pull/3206
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3206", "html_url": "https://github.com/huggingface/datasets/pull/3206", "diff_url": "https://github.com/huggingface/datasets/pull/3206.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3206.patch", "merged_at": null }
true
1,044,099,561
3,205
Add Multidoc2dial Dataset
closed
[ "@songfeng cc", "Hi @sivasankalpp, thanks for your PR.\r\n\r\nThere was a bug in TensorFlow/Keras. We have made a temporary fix in our master branch. Please, merge master into your PR branch, so that the CI tests pass.\r\n\r\n```\r\ngit checkout multidoc2dial\r\ngit fetch upstream master\r\ngit merge upstream/mas...
2021-11-03T20:48:31
2021-11-24T17:32:49
2021-11-24T16:55:08
This PR adds the MultiDoc2Dial dataset introduced in this [paper](https://arxiv.org/pdf/2109.12595v1.pdf )
sivasankalpp
https://github.com/huggingface/datasets/pull/3205
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3205", "html_url": "https://github.com/huggingface/datasets/pull/3205", "diff_url": "https://github.com/huggingface/datasets/pull/3205.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3205.patch", "merged_at": "2021-11-24T16:55...
true
1,043,707,307
3,204
FileNotFoundError for TupleIE dataste
closed
[ "@mariosasko @lhoestq Could you give me an update on how to load the dataset after the fix?\r\nThanks.", "Hi @arda-vianai,\r\n\r\nfirst, you can try:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset('tuple_ie', 'all', revision=\"master\")\r\n```\r\nIf this doesn't work, your version of `datasets...
2021-11-03T14:56:55
2021-11-05T15:51:15
2021-11-05T14:16:05
Hi, `dataset = datasets.load_dataset('tuple_ie', 'all')` returns a FileNotFound error. Is the data not available? Many thanks.
arda-vianai
https://github.com/huggingface/datasets/issues/3204
null
false
1,043,552,766
3,203
Updated: DaNE - updated URL for download
closed
[ "Actually it looks like the old URL is still working, and it's also the one that is mentioned in https://github.com/alexandrainst/danlp/blob/master/docs/docs/datasets.md\r\n\r\nWhat makes you think we should use the new URL ?", "@lhoestq Sorry! I might have jumped to conclusions a bit too fast here... \r\n\r\nI w...
2021-11-03T12:55:13
2021-11-04T13:14:36
2021-11-04T11:46:43
It seems that DaNLP has updated their download URLs and it therefore also needs to be updated in here...
MalteHB
https://github.com/huggingface/datasets/pull/3203
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3203", "html_url": "https://github.com/huggingface/datasets/pull/3203", "diff_url": "https://github.com/huggingface/datasets/pull/3203.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3203.patch", "merged_at": "2021-11-04T11:46...
true
1,043,213,660
3,202
Add mIoU metric
closed
[ "Resolved via https://github.com/huggingface/datasets/pull/3745." ]
2021-11-03T08:42:32
2022-06-01T17:39:05
2022-06-01T17:39:04
**Is your feature request related to a problem? Please describe.** Recently, some semantic segmentation models were added to HuggingFace Transformers, including [SegFormer](https://huggingface.co/transformers/model_doc/segformer.html) and [BEiT](https://huggingface.co/transformers/model_doc/beit.html). Semantic seg...
NielsRogge
https://github.com/huggingface/datasets/issues/3202
null
false
1,043,209,142
3,201
Add GSM8K dataset
closed
[ "Closed via https://github.com/huggingface/datasets/pull/4103" ]
2021-11-03T08:36:44
2022-04-13T11:56:12
2022-04-13T11:56:11
## Adding a Dataset - **Name:** GSM8K (short for Grade School Math 8k) - **Description:** GSM8K is a dataset of 8.5K high quality linguistically diverse grade school math word problems created by human problem writers. - **Paper:** https://openai.com/blog/grade-school-math/ - **Data:** https://github.com/openai/gra...
NielsRogge
https://github.com/huggingface/datasets/issues/3201
null
false
1,042,887,291
3,200
Catch token invalid error in CI
closed
[]
2021-11-02T21:56:26
2021-11-03T09:41:08
2021-11-03T09:41:08
The staging back end sometimes returns invalid token errors when trying to delete a repo. I modified the fixture in the test that uses staging to ignore this error
lhoestq
https://github.com/huggingface/datasets/pull/3200
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3200", "html_url": "https://github.com/huggingface/datasets/pull/3200", "diff_url": "https://github.com/huggingface/datasets/pull/3200.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3200.patch", "merged_at": "2021-11-03T09:41...
true
1,042,860,935
3,199
Bump huggingface_hub
closed
[]
2021-11-02T21:29:10
2021-11-14T01:48:11
2021-11-02T21:41:40
huggingface_hub just released its first minor version, so we need to update the dependency It was supposed to be part of 1.15.0 but I'm adding it for 1.15.1
lhoestq
https://github.com/huggingface/datasets/pull/3199
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3199", "html_url": "https://github.com/huggingface/datasets/pull/3199", "diff_url": "https://github.com/huggingface/datasets/pull/3199.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3199.patch", "merged_at": "2021-11-02T21:41...
true
1,042,679,548
3,198
Add Multi-Lingual LibriSpeech
closed
[]
2021-11-02T18:23:59
2021-11-04T17:09:22
2021-11-04T17:09:22
Add https://www.openslr.org/94/
patrickvonplaten
https://github.com/huggingface/datasets/pull/3198
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3198", "html_url": "https://github.com/huggingface/datasets/pull/3198", "diff_url": "https://github.com/huggingface/datasets/pull/3198.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3198.patch", "merged_at": "2021-11-04T17:09...
true
1,042,541,127
3,197
Fix optimized encoding for arrays
closed
[]
2021-11-02T15:55:53
2021-11-02T19:12:24
2021-11-02T19:12:23
Hi ! #3124 introduced a regression that made the benchmarks CI fail because of a bad array comparison when checking the first encoded element. This PR fixes this by making sure that encoding is applied on all sequence types except lists. cc @eladsegal fyi (no big deal)
lhoestq
https://github.com/huggingface/datasets/pull/3197
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3197", "html_url": "https://github.com/huggingface/datasets/pull/3197", "diff_url": "https://github.com/huggingface/datasets/pull/3197.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3197.patch", "merged_at": "2021-11-02T19:12...
true
1,042,223,913
3,196
QOL improvements: auto-flatten_indices and desc in map calls
closed
[]
2021-11-02T11:28:50
2021-11-02T15:41:09
2021-11-02T15:41:08
This PR: * automatically calls `flatten_indices` where needed: in `unique` and `save_to_disk` to avoid saving the indices file * adds descriptions to the map calls Fix #3040
mariosasko
https://github.com/huggingface/datasets/pull/3196
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3196", "html_url": "https://github.com/huggingface/datasets/pull/3196", "diff_url": "https://github.com/huggingface/datasets/pull/3196.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3196.patch", "merged_at": "2021-11-02T15:41...
true
1,042,204,044
3,195
More robust `None` handling
closed
[ "I also created a PR regarding `disable_nullable` that must be always `False` by default, in order to always allow None values\r\nhttps://github.com/huggingface/datasets/pull/3211", "@lhoestq I addressed your comments, added tests, did some refactoring to make the implementation cleaner and added support for `Non...
2021-11-02T11:15:10
2021-12-09T14:27:00
2021-12-09T14:26:58
PyArrow has explicit support for `null` values, so it makes sense to support Nones on our side as well. [Colab Notebook with examples](https://colab.research.google.com/drive/1zcK8BnZYnRe3Ao2271u1T19ag9zLEiy3?usp=sharing) Changes: * allow None for the features types with special encoding (`ClassLabel, Translatio...
mariosasko
https://github.com/huggingface/datasets/pull/3195
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3195", "html_url": "https://github.com/huggingface/datasets/pull/3195", "diff_url": "https://github.com/huggingface/datasets/pull/3195.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3195.patch", "merged_at": "2021-12-09T14:26...
true
1,041,999,535
3,194
Update link to Datasets Tagging app in Spaces
closed
[]
2021-11-02T08:13:50
2021-11-08T10:36:23
2021-11-08T10:36:22
Fix #3193.
albertvillanova
https://github.com/huggingface/datasets/pull/3194
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3194", "html_url": "https://github.com/huggingface/datasets/pull/3194", "diff_url": "https://github.com/huggingface/datasets/pull/3194.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3194.patch", "merged_at": "2021-11-08T10:36...
true
1,041,971,117
3,193
Update link to datasets-tagging app
closed
[]
2021-11-02T07:39:59
2021-11-08T10:36:22
2021-11-08T10:36:22
Once datasets-tagging has been transferred to Spaces: - huggingface/datasets-tagging#22 We should update the link in Datasets.
albertvillanova
https://github.com/huggingface/datasets/issues/3193
null
false
1,041,308,086
3,192
Multiprocessing filter/map (tests) not working on Windows
open
[]
2021-11-01T15:36:08
2021-11-01T15:57:03
null
While running the tests, I found that the multiprocessing examples fail on Windows, or rather they do not complete: they cause a deadlock. I haven't dug deep into it, but they do not seem to work as-is. I currently have no time to tests this in detail but at least the tests seem not to run correctly (deadlocking). #...
BramVanroy
https://github.com/huggingface/datasets/issues/3192
null
false
1,041,225,111
3,191
Dataset viewer issue for '*compguesswhat*'
closed
[ "```python\r\n>>> import datasets\r\n>>> dataset = datasets.load_dataset('compguesswhat', name='compguesswhat-original',split='train', streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.ve...
2021-11-01T14:16:49
2022-09-12T08:02:29
2022-09-12T08:02:29
## Dataset viewer issue for '*compguesswhat*' **Link:** https://huggingface.co/datasets/compguesswhat File not found Am I the one who added this dataset ? No
benotti
https://github.com/huggingface/datasets/issues/3191
null
false
1,041,153,631
3,190
combination of shuffle and filter results in a bug
closed
[ "I cannot reproduce this on master and pyarrow==4.0.1.\r\n", "Hi ! There was a regression in `datasets` 1.12 that introduced this bug. It has been fixed in #3019 in 1.13\r\n\r\nCan you try to update `datasets` and try again ?", "Thanks a lot, fixes with 1.13" ]
2021-11-01T13:07:29
2021-11-02T10:50:49
2021-11-02T10:50:49
## Describe the bug Hi, I would like to shuffle a dataset, then filter it based on each existing label. however, the combination of `filter`, `shuffle` seems to results in a bug. In the minimal example below, as you see in the filtered results, the filtered labels are not unique, meaning filter has not worked. Any su...
rabeehk
https://github.com/huggingface/datasets/issues/3190
null
false
1,041,044,986
3,189
conll2003 incorrect label explanation
closed
[ "Hi @BramVanroy,\r\n\r\nsince these fields are of type `ClassLabel` (you can check this with `dset.features`), you can inspect the possible values with:\r\n```python\r\ndset.features[field_name].feature.names # .feature because it's a sequence of labels\r\n```\r\n\r\nand to find the mapping between names and integ...
2021-11-01T11:03:30
2021-11-09T10:40:58
2021-11-09T10:40:58
In the [conll2003](https://huggingface.co/datasets/conll2003#data-fields) README, the labels are described as follows > - `id`: a `string` feature. > - `tokens`: a `list` of `string` features. > - `pos_tags`: a `list` of classification labels, with possible values including `"` (0), `''` (1), `#` (2), `$` (3), `(`...
BramVanroy
https://github.com/huggingface/datasets/issues/3189
null
false
1,040,980,712
3,188
conll2002 issues
closed
[ "Hi ! Thanks for reporting :)\r\n\r\nThis is related to https://github.com/huggingface/datasets/issues/2742, I'm working on it. It should fix the viewer for around 80 datasets.\r\n", "Ah, hadn't seen that sorry.\r\n\r\nThe scrambled \"point of contact\" is a separate issue though, I think.", "@lhoestq The \"poi...
2021-11-01T09:49:24
2021-11-15T13:50:59
2021-11-12T17:18:11
**Link:** https://huggingface.co/datasets/conll2002 The dataset viewer throws a server error when trying to preview the dataset. ``` Message: Extraction protocol 'train' for file at 'https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/esp.train' is not implemented yet ``` I...
BramVanroy
https://github.com/huggingface/datasets/issues/3188
null
false
1,040,412,869
3,187
Add ChrF(++) (as implemented in sacrebleu)
closed
[]
2021-10-31T08:53:58
2021-11-02T14:50:50
2021-11-02T14:31:26
Similar to my [PR for TER](https://github.com/huggingface/datasets/pull/3153), it feels only right to also include ChrF and friends. These are present in Sacrebleu and are therefore very similar to implement as TER and sacrebleu. I tested the implementation with sacrebleu's tests to verify. You can try this below for y...
BramVanroy
https://github.com/huggingface/datasets/pull/3187
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3187", "html_url": "https://github.com/huggingface/datasets/pull/3187", "diff_url": "https://github.com/huggingface/datasets/pull/3187.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3187.patch", "merged_at": "2021-11-02T14:31...
true
1,040,369,397
3,186
Dataset viewer for nli_tr
closed
[ "It's an issue with the streaming mode:\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset = datasets.load_dataset('nli_tr', name='snli_tr',split='test', streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datas...
2021-10-31T03:56:33
2022-09-12T09:15:34
2022-09-12T08:43:09
## Dataset viewer issue for '*nli_tr*' **Link:** https://huggingface.co/datasets/nli_tr Hello, Thank you for the new dataset preview feature that will help the users to view the datasets online. We just noticed that the dataset viewer widget in the `nli_tr` dataset shows the error below. The error must be d...
e-budur
https://github.com/huggingface/datasets/issues/3186
null
false
1,040,291,961
3,185
7z dataset preview not implemented?
closed
[ "It's a bug in the dataset viewer: the dataset cannot be downloaded in streaming mode, but since the dataset is relatively small, the dataset viewer should have fallback to normal mode. Working on a fix.", "Fixed. https://huggingface.co/datasets/samsum/viewer/samsum/train\r\n\r\n<img width=\"1563\" alt=\"Capture ...
2021-10-30T20:18:27
2022-04-12T11:48:16
2022-04-12T11:48:07
## Dataset viewer issue for dataset 'samsum' **Link:** https://huggingface.co/datasets/samsum Server Error Status code: 400 Exception: NotImplementedError Message: Extraction protocol '7z' for file at 'https://arxiv.org/src/1911.12237v2/anc/corpus.7z' is not implemented yet
Kirili4ik
https://github.com/huggingface/datasets/issues/3185
null
false
1,040,114,102
3,184
RONEC v2
closed
[ "@lhoestq Thanks for the review. I totally understand what you are saying. Normally, I would definitely agree with you, but in this particular case, the quality of v1 is poor, and the dataset itself is small (at the time we created v1 it was the only RO NER dataset, and its size was limited by the available resourc...
2021-10-30T10:50:03
2021-11-02T16:02:23
2021-11-02T16:02:22
Hi, as we've recently finished with the new RONEC (Romanian Named Entity Corpus), we'd like to update the dataset here as well. It's actually essential as links to V1 are no longer valid. In reality we'd like to completely replace v1, as v2 is a full re-annotation of v1 with additional data (up to 2x size vs v1). ...
dumitrescustefan
https://github.com/huggingface/datasets/pull/3184
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3184", "html_url": "https://github.com/huggingface/datasets/pull/3184", "diff_url": "https://github.com/huggingface/datasets/pull/3184.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3184.patch", "merged_at": "2021-11-02T16:02...
true
1,039,761,120
3,183
Add missing docstring to DownloadConfig
closed
[]
2021-10-29T16:56:35
2021-11-02T10:25:38
2021-11-02T10:25:37
Document the `use_etag` and `num_proc` attributes in `DownloadConig`.
mariosasko
https://github.com/huggingface/datasets/pull/3183
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3183", "html_url": "https://github.com/huggingface/datasets/pull/3183", "diff_url": "https://github.com/huggingface/datasets/pull/3183.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3183.patch", "merged_at": "2021-11-02T10:25...
true
1,039,739,606
3,182
Don't memoize strings when hashing since two identical strings may have different python ids
closed
[ "This change slows down the hash computation a little bit but from my tests it doesn't look too impactful. So I think it's fine to merge this." ]
2021-10-29T16:26:17
2021-11-02T09:35:38
2021-11-02T09:35:37
When hashing an object that contains the same string several times, the hashing could return a different hash depending on whether the identical strings share the same python `id()`. Here is an example code that shows how the issue can affect the caching: ```python import json import pyarrow as pa from datasets.features import ...
lhoestq
https://github.com/huggingface/datasets/pull/3182
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3182", "html_url": "https://github.com/huggingface/datasets/pull/3182", "diff_url": "https://github.com/huggingface/datasets/pull/3182.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3182.patch", "merged_at": "2021-11-02T09:35...
true
1,039,682,097
3,181
`None` converted to `"None"` when loading a dataset
closed
[ "Hi @eladsegal, thanks for reporting.\r\n\r\n@mariosasko I saw you are already working on this, but maybe my comment will be useful to you.\r\n\r\nAll values are casted to their corresponding feature type (including `None` values). For example if the feature type is `Value(\"bool\")`, `None` is casted to `False`.\r...
2021-10-29T15:23:53
2021-12-11T01:16:40
2021-12-09T14:26:57
## Describe the bug When loading a dataset `None` values of the type `NoneType` are converted to `'None'` of the type `str`. ## Steps to reproduce the bug ```python from datasets import load_dataset qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists") print(qasper[60]["full_text...
eladsegal
https://github.com/huggingface/datasets/issues/3181
null
false
1,039,641,316
3,180
fix label mapping
closed
[ "heck, test failings. moving to draft. will come back to this later today hopefully", "Thanks for fixing this :)\r\nI just updated the dataset_infos.json and added the missing `pretty_name` tag to the dataset card", "thank you @lhoestq! running around as always it fell through as a lower priority..." ]
2021-10-29T14:42:24
2021-11-02T13:41:07
2021-11-02T10:37:12
Fixing label mapping for hlgd. 0 corresponds to same event and 1 corresponds to different event <img width="642" alt="Capture d’écran 2021-10-29 à 10 39 58 AM" src="https://user-images.githubusercontent.com/16107619/139454810-1f225e3d-ad48-44a8-b8b1-9205c9533839.png"> <img width="638" alt="Capture d’écran 2021-10-...
VictorSanh
https://github.com/huggingface/datasets/pull/3180
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3180", "html_url": "https://github.com/huggingface/datasets/pull/3180", "diff_url": "https://github.com/huggingface/datasets/pull/3180.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3180.patch", "merged_at": "2021-11-02T10:37...
true
1,039,571,928
3,179
Cannot load dataset when the config name is "special"
closed
[ "The issue is that the datasets are malformed. Not a bug with the datasets library" ]
2021-10-29T13:30:47
2021-10-29T13:35:21
2021-10-29T13:35:21
## Describe the bug After https://github.com/huggingface/datasets/pull/3159, we can get the config name of "Check/region_1", which is "Check___region_1". But now we cannot load the dataset (not sure it's related to the above PR though). It's the case for all the similar datasets, listed in https://github.com/hugg...
severo
https://github.com/huggingface/datasets/issues/3179
null
false
1,039,539,076
3,178
"Property couldn't be hashed properly" even though fully picklable
closed
[ "After some digging, I found that this is caused by `dill` and using `recurse=True` when trying to dump the object. The problem also occurs without multiprocessing. I can only find [the following information](https://dill.readthedocs.io/en/latest/dill.html#dill._dill.dumps) about this:\r\n\r\n> If recurse=True, th...
2021-10-29T12:56:09
2024-08-19T13:03:49
2022-11-02T17:18:43
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ...
BramVanroy
https://github.com/huggingface/datasets/issues/3178
null
false
1,039,487,780
3,177
More control over TQDM when using map/filter with multiple processes
closed
[ "Hi,\r\n\r\nIt's hard to provide an API that would cover all use-cases with tqdm in this project.\r\n\r\nHowever, you can make it work by defining a custom decorator (a bit hacky tho) as follows:\r\n```python\r\nimport datasets\r\n\r\ndef progress_only_on_rank_0(func):\r\n def wrapper(*args, **kwargs):\r\n ...
2021-10-29T11:56:16
2023-02-13T20:16:40
2023-02-13T20:16:40
It would help with the clutter in my terminal if tqdm is only shown for rank 0 when using `num_proc>0` in the map and filter methods of datasets. ```python dataset.map(lambda examples: tokenize(examples["text"]), batched=True, num_proc=6) ``` The above snippet leads to a lot of TQDM bars and depending on your...
BramVanroy
https://github.com/huggingface/datasets/issues/3177
null
false
1,039,068,312
3,176
OpenSLR dataset: update generate_examples to properly extract data for SLR83
closed
[ "Also fix #3125." ]
2021-10-29T00:59:27
2021-11-04T16:20:45
2021-10-29T10:04:09
Fixed #3168. The SLR83 indices are CSV files and there wasn't any code in openslr.py to process these files properly. The end result was an empty table. I've added code to properly process these CSV files.
tyrius02
https://github.com/huggingface/datasets/pull/3176
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3176", "html_url": "https://github.com/huggingface/datasets/pull/3176", "diff_url": "https://github.com/huggingface/datasets/pull/3176.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3176.patch", "merged_at": "2021-10-29T10:04...
true
1,038,945,271
3,175
Add docs for `to_tf_dataset`
closed
[ "This looks great, thank you!", "Thanks !\r\n\r\nFor some reason the new GIF is 6MB, which is a bit heavy for an image on a website. The previous one was around 200KB though which is perfect. For a good experience we usually expect images to be less than 500KB - otherwise for users with poor connection it takes t...
2021-10-28T20:55:22
2021-11-03T15:39:36
2021-11-03T10:07:23
This PR adds some documentation for new features released in v1.13.0, with the main addition being `to_tf_dataset`: - Show how to use `to_tf_dataset` in the tutorial, and move `set_format(type='tensorflow'...)` to the Process section (let me know if I'm missing anything @Rocketknight1 😅). - Add an example for load...
stevhliu
https://github.com/huggingface/datasets/pull/3175
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3175", "html_url": "https://github.com/huggingface/datasets/pull/3175", "diff_url": "https://github.com/huggingface/datasets/pull/3175.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3175.patch", "merged_at": "2021-11-03T10:07...
true
1,038,427,245
3,174
Asserts replaced by exceptions (huggingface#3171)
closed
[ "Your first PR went smoothly, well done!\r\nYou are welcome to continue contributing to this project.\r\nGràcies, @joseporiolayats! 😉 " ]
2021-10-28T11:55:45
2021-11-06T06:35:32
2021-10-29T13:08:43
I've replaced two asserts with their proper exceptions, as described in issue #3171, following the contributing guidelines. PS: This is one of my first PRs, hoping I don't break anything!
joseporiolayats
https://github.com/huggingface/datasets/pull/3174
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3174", "html_url": "https://github.com/huggingface/datasets/pull/3174", "diff_url": "https://github.com/huggingface/datasets/pull/3174.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3174.patch", "merged_at": "2021-10-29T13:08...
true