Column schema of the preview (ranges are min/max across the dataset; "nullable" marks columns that contain nulls):

id               int64          599M to 3.48B
number           int64          1 to 7.8k
title            string         lengths 1 to 290
state            string         2 classes
comments         list           lengths 0 to 30
created_at       timestamp[s]   2020-04-14 10:18:02 to 2025-10-05 06:37:50
updated_at       timestamp[s]   2020-04-27 16:04:17 to 2025-10-05 10:32:43
closed_at        timestamp[s]   2020-04-14 12:01:40 to 2025-10-01 13:56:03, nullable
body             string         lengths 0 to 228k, nullable
user             string         lengths 3 to 26
html_url         string         lengths 46 to 51
pull_request     dict
is_pull_request  bool           2 classes

#3576 · Add PASS dataset · pull request · closed · mariosasko
id 1,102,059,651 · created 2022-01-13T17:16:07 · updated 2022-01-20T16:50:48 · closed 2022-01-20T16:50:47 · merged 2022-01-20T16:50...
https://github.com/huggingface/datasets/pull/3576
body: This PR adds the PASS dataset. Closes #3043
comments: none

#3575 · Add Arrow type casting to struct for Image and Audio + Support nested casting · pull request · closed · lhoestq
id 1,101,947,955 · created 2022-01-13T15:36:59 · updated 2022-11-29T11:14:16 · closed 2022-01-21T13:22:27 · merged 2022-01-21T13:22...
https://github.com/huggingface/datasets/pull/3575
body: ## Intro 1. Currently, it's not possible to have nested features containing Audio or Image. 2. Moreover, one can keep an Arrow array as a StringArray to store paths to images, but such arrays can't be directly concatenated to another image array if it's stored as another Arrow type (typically, a StructType). 3...
comments: "Regarding the tests I'm just missing the FixedSizeListType type casting for ListArray objects, will do it tomorrow as well as adding new tests + docstrings, and also adding soundfile in the CI" · "While writing some tests I noticed that the ExtensionArray can't be directly concatenated - maybe we can get ri...

#3574 · Fix qa4mre tags · pull request · closed · lhoestq
id 1,101,781,401 · created 2022-01-13T13:56:59 · updated 2022-01-13T14:03:02 · closed 2022-01-13T14:03:01 · merged 2022-01-13T14:03...
https://github.com/huggingface/datasets/pull/3574
body: The YAML tags were invalid. I also fixed the dataset mirroring logging that failed because of this issue [here](https://github.com/huggingface/datasets/actions/runs/1690109581)
comments: none

#3573 · Add Mauve metric · pull request · closed · jthickstun
id 1,101,157,676 · created 2022-01-13T03:52:48 · updated 2022-01-20T15:00:08 · closed 2022-01-20T15:00:08 · merged 2022-01-20T15:00...
https://github.com/huggingface/datasets/pull/3573
body: Add support for the [Mauve](https://github.com/krishnap25/mauve) metric introduced in this [paper](https://arxiv.org/pdf/2102.01454.pdf) (NeurIPS 2021).
comments: "Hi! The CI was failing because `mauve-text` wasn't installed. I added it to the CI setup :) I also made some minor changes to the script itself, especially to remove `**kwargs` and explicitly mention all the supported arguments (this way, if someone makes a typo in a parameter they get an error)"

#3572 · ConnectionError in IndicGLUE dataset · issue · closed · sahoodib
id 1,100,634,244 · created 2022-01-12T17:59:36 · updated 2022-09-15T21:57:34 · closed 2022-09-15T21:57:34
https://github.com/huggingface/datasets/issues/3572
body: While I am trying to load the IndicGLUE dataset (https://huggingface.co/datasets/indic_glue), it gives me the error: ``` ConnectionError: Couldn't reach https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/evaluations/wikiann-ner.tar.gz (error 403)
comments: "@sahoodib, thanks for reporting. Indeed, none of the data links appearing on the IndicGLUE website are working, e.g.: https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/evaluations/soham-articles.tar.gz ``` <Error> <Code>UserProjectAccountProblem</Code> <Message>User project billi...

#3571 · Add missing tasks to MuchoCine dataset · pull request · closed · mariosasko
id 1,100,519,604 · created 2022-01-12T16:07:32 · updated 2022-01-20T16:51:08 · closed 2022-01-20T16:51:07 · merged 2022-01-20T16:51...
https://github.com/huggingface/datasets/pull/3571
body: Addresses the 2nd bullet point in #2520. I'm also removing the licensing information, because I couldn't verify that it is correct.
comments: none

#3570 · Add the KMWP dataset (extension of #3564) · pull request · closed · sooftware
id 1,100,480,791 · created 2022-01-12T15:33:08 · updated 2022-10-01T06:43:16 · closed 2022-10-01T06:43:16 · not merged
https://github.com/huggingface/datasets/pull/3570
body: New pull request of #3564 (Add the KMWP dataset)
comments: "Sorry, I'm late to check! I'll send it to you soon!" · "Thanks for your contribution, @sooftware. Are you still interested in adding this dataset? We are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets We would suggest you c...

#3569 · Add the DKTC dataset (extension of #3564) · pull request · closed · sooftware
id 1,100,478,994 · created 2022-01-12T15:31:29 · updated 2022-10-01T06:43:05 · closed 2022-10-01T06:43:04 · not merged
https://github.com/huggingface/datasets/pull/3569
body: New pull request of #3564 (for DKTC).
comments: "I reflected your comment! @lhoestq" · "Wait, the format of the data just changed, so I'll take it into consideration and commit it." · "I updated the code according to the dataset structure change." · "Thanks! I think the dummy data are not valid yet - the dummy train.csv file only contains a partial example (th...

#3568 · Downloading Hugging Face Medical Dialog Dataset NonMatchingSplitsSizesError · issue · closed · fabianslife
id 1,100,380,631 · created 2022-01-12T14:03:44 · updated 2022-02-14T09:32:34 · closed 2022-02-14T09:32:34
https://github.com/huggingface/datasets/issues/3568
body: I wanted to download the Medical Dialog Dataset from Hugging Face, using this GitHub link: https://github.com/huggingface/datasets/tree/master/datasets/medical_dialog After downloading the raw datasets from Google Drive, I unpacked everything and put it in the same folder as medical_dialog.py, which is: ``` ...
comments: "Hi @fabianslife, thanks for reporting. I think you were using an old version of `datasets`, because this bug was already fixed in version `1.13.0` (13 Oct 2021): Fix: 55fd140a63b8f03a0e72985647e498f1fc799d3f, PR: #3046, Issue: #2969. Please, feel free to update the library: `pip install -...

#3567 · Fix push to hub to allow individual split push · pull request · closed · thomasw21
id 1,100,296,696 · created 2022-01-12T12:42:58 · updated 2023-09-24T09:54:19 · closed 2022-07-27T12:11:11 · not merged
https://github.com/huggingface/datasets/pull/3567
body: # Description of the issue If one decides to push a split to a datasets repo, they upload the dataset and override the config. However, previous config splits end up being lost despite the necessary data still being there. The new flow is the following: - query the old config from the repo - update into a new co...
comments: "This has been addressed in https://github.com/huggingface/datasets/pull/4415. Closing."

#3566 · Add initial electricity time series dataset · pull request · closed · kashif
id 1,100,155,902 · created 2022-01-12T10:21:32 · updated 2022-02-15T13:31:48 · closed 2022-02-15T13:31:48 · not merged
https://github.com/huggingface/datasets/pull/3566
body: Here is an initial prototype time series dataset
comments: "@kashif Some commits on the PR branch are not authored by you, so could you please open a new PR and not use rebase this time :)? You can copy and paste the dataset dir to the new branch." · "making a new PR"

#3565 · Add parameter `preserve_index` to `from_pandas` · pull request · closed · Sorrow321
id 1,099,296,693 · created 2022-01-11T15:26:37 · updated 2022-01-12T16:11:27 · closed 2022-01-12T16:11:27 · merged 2022-01-12T16:11...
https://github.com/huggingface/datasets/pull/3565
body: Added an optional parameter so that the user can get rid of useless index preserving. [Issue](https://github.com/huggingface/datasets/issues/3563)
comments: "I did `make style` and it affected over 500 files ``` All done! ✨ 🍰 ✨ 575 files reformatted, 372 files left unchanged. isort tests src benchmarks datasets/**/*.py metri ``` (result) ![image](https://user-images.githubusercontent.com/20703486/149166681-2f9d1bc4-116a-4f53-ad42...

#3564 · Add the KMWP & DKTC dataset · pull request · closed · sooftware
id 1,099,214,403 · created 2022-01-11T14:14:08 · updated 2022-01-12T15:33:49 · closed 2022-01-12T15:33:28 · not merged
https://github.com/huggingface/datasets/pull/3564
body: Add the DKTC dataset. - https://github.com/tunib-ai/DKTC
comments: "I reflected your review. cc. @lhoestq" · "Ah sorry, I missed the KMWP comment, wait." · "I opened 2 new pull requests: #3569 #3570"

#3563 · Dataset.from_pandas preserves useless index · issue · closed · Sorrow321
id 1,099,070,368 · created 2022-01-11T12:07:07 · updated 2022-01-12T16:11:27 · closed 2022-01-12T16:11:27
https://github.com/huggingface/datasets/issues/3563
body: ## Describe the bug Let's say that you want to create a Dataset object from a pandas dataframe. Most likely you will write something like this: ``` import pandas as pd from datasets import Dataset df = pd.read_csv('some_dataset.csv') # Some DataFrame preprocessing code... dataset = Dataset.from_pandas(df) `...
comments: "Hi! That makes sense. Sure, feel free to open a PR! Just a small suggestion: let's make `preserve_index` a parameter of `Dataset.from_pandas` (which we then pass to `InMemoryTable.from_pandas`) with `None` as a default value, so this isn't a breaking change."

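For illustration, a minimal sketch of the behavior reported in #3563 and the `preserve_index` parameter added in #3565 (`preserve_index` is the real argument of `Dataset.from_pandas`; the dataframe below is a stand-in for the issue's CSV):

```python
import pandas as pd
from datasets import Dataset

# Stand-in for pd.read_csv('some_dataset.csv') from the issue.
df = pd.DataFrame({"text": ["a", "b", "c"], "label": [0, 1, 0]})
df = df[df["label"] == 0]  # filtering leaves a non-trivial pandas index

# Without preserve_index=False, the stale index is stored as an extra
# "__index_level_0__" column in the resulting Arrow table.
dataset = Dataset.from_pandas(df, preserve_index=False)
print(dataset.column_names)  # ['text', 'label']
```
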
#3562 · Allow multiple task templates of the same type · pull request · closed · mariosasko
id 1,098,341,351 · created 2022-01-10T20:32:07 · updated 2022-01-11T14:16:47 · closed 2022-01-11T14:16:47 · merged 2022-01-11T14:16...
https://github.com/huggingface/datasets/pull/3562
body: Add support for multiple task templates of the same type. Fixes (partially) #2520. CC: @lewtun
comments: none

#3561 · Cannot load 'bookcorpusopen' · issue · closed · HUIYINXUE
id 1,098,328,870 · created 2022-01-10T20:17:18 · updated 2022-02-14T09:19:27 · closed 2022-02-14T09:18:47
https://github.com/huggingface/datasets/issues/3561
body: ## Describe the bug Cannot load 'bookcorpusopen' ## Steps to reproduce the bug ```python dataset = load_dataset('bookcorpusopen') ``` or ```python dataset = load_dataset('bookcorpusopen', script_version='master') ``` ## Actual results ConnectionError: Couldn't reach https://the-eye.eu/public/AI/pile_pre...
comments: "The host of this copy of the dataset (https://the-eye.eu) is down and has been down for a good amount of time ([potentially months](https://www.reddit.com/r/Roms/comments/q82s15/theeye_downdied/)). Finding this dataset is a little esoteric, as the original authors took down the official BookCorpus dataset, so...

#3560 · Run pyupgrade for Python 3.6+ · pull request · closed · bryant1410
id 1,098,280,652 · created 2022-01-10T19:20:53 · updated 2022-01-31T13:38:49 · closed 2022-01-31T09:37:34 · merged 2022-01-31T09:37...
https://github.com/huggingface/datasets/pull/3560
body: Run the command: ```bash pyupgrade $(find . -name "*.py" -type f) --py36-plus ``` This mainly avoids unnecessary list creations and also removes code that is unnecessary for Python 3.6+. It was originally part of #3489. Tip for reviewing faster: use the CLI (`git diff`) and scroll.
comments: "Hi! Thanks for the change :) Could it be possible to only run it for the code in `src/`? We try not to change the code in the `datasets/` directory too often, since it refreshes the users' cache when they upgrade `datasets`." · "> Hi! Thanks for the change :) > Could it be possible to only run it for the c...

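For context, a small before/after illustration of the kind of rewrites `pyupgrade --py36-plus` performs (representative examples, not an exhaustive list):

```python
name, items = "datasets", [1, 2, 2]

# Before: idioms pyupgrade rewrites
greeting = "hello {}".format(name)   # str.format with trivial placeholders
unique = set([x for x in items])     # set() wrapped around a list comprehension

# After pyupgrade --py36-plus
greeting = f"hello {name}"           # f-string (Python 3.6+)
unique = {x for x in items}          # set comprehension, no temporary list
```
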
#3559 · Fix `DuplicatedKeysError` and improve card in `tweet_qa` · pull request · closed · mariosasko
id 1,098,178,222 · created 2022-01-10T17:27:40 · updated 2022-01-12T15:13:58 · closed 2022-01-12T15:13:57 · merged 2022-01-12T15:13...
https://github.com/huggingface/datasets/pull/3559
body: Fix #3555
comments: none

#3558 · Integrate Milvus (pymilvus) library · issue · open · mariosasko
id 1,098,025,866 · created 2022-01-10T15:20:29 · updated 2022-03-05T12:28:36
https://github.com/huggingface/datasets/issues/3558
body: Milvus is a popular open-source vector database. We should add a new vector index to support this project.
comments: "Hi @mariosasko, just searching randomly and I found this issue~ I'm the tech lead of Milvus and we are looking forward to integrating Milvus together with huggingface datasets. Any suggestion on how we could start?" · "Feel free to assign it to me; we probably need some guidance on it" · "@mariosasko any updat...

#3557 · Fix bug in `ImageClassification` task template · pull request · closed · mariosasko
id 1,097,946,034 · created 2022-01-10T14:09:59 · updated 2022-01-11T15:47:52 · closed 2022-01-11T15:47:52 · merged 2022-01-11T15:47...
https://github.com/huggingface/datasets/pull/3557
body: Fixes a bug in the `ImageClassification` task template which required specifying class labels twice in dataset scripts. Additionally, this PR refactors the API around the classification task templates for cleaner `labels` handling. CC: @lewtun @nateraw
comments: "The CI failures are unrelated to the changes in this PR." · "> The CI failures are unrelated to the changes in this PR. It seems that some of the failures are due to the tests on the dataset cards (e.g. CIFAR, MNIST, FASHION_MNIST). Perhaps it's worth addressing those in this PR to avoid confusing downstre...

#3556 · Preserve encoding/decoding with features in `Iterable.map` call · pull request · closed · mariosasko
id 1,097,907,724 · created 2022-01-10T13:32:20 · updated 2022-01-18T19:54:08 · closed 2022-01-18T19:54:07 · merged 2022-01-18T19:54...
https://github.com/huggingface/datasets/pull/3556
body: As described in https://github.com/huggingface/datasets/issues/3505#issuecomment-1004755657, this PR uses a generator expression to encode/decode examples with `features` (which are set to None in `map`) before applying a map transform. Fix #3505
comments: none

#3555 · DuplicatedKeysError when loading tweet_qa dataset · issue · closed · LeonieWeissweiler
id 1,097,736,982 · created 2022-01-10T10:53:11 · updated 2022-01-12T15:17:33 · closed 2022-01-12T15:13:56
https://github.com/huggingface/datasets/issues/3555
body: When loading the tweet_qa dataset with `load_dataset('tweet_qa')`, the following error occurs: `DuplicatedKeysError: FAILURE TO GENERATE DATASET! Found duplicate Key: 2a167f9e016ba338e1813fed275a6a1e. Keys should be unique and deterministic in nature` Might be related to issues #2433 and #2333 - `datasets` ...
comments: "Hi, we've just merged the PR with the fix. The fixed version of the dataset can be downloaded as follows: ```python import datasets dset = datasets.load_dataset("tweet_qa", revision="master") ```"

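The fix in #3559 landed on master before a release, hence the `revision` pin in the quoted comment; as a runnable sketch (`revision` is a real `load_dataset` argument):

```python
import datasets

# Load the dataset script from the master branch, where the
# DuplicatedKeysError fix (#3559) was merged, instead of the last release.
dset = datasets.load_dataset("tweet_qa", revision="master")
```
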
#3554 · ImportError: cannot import name 'is_valid_waiter_error' · issue · closed · danielbellhv
id 1,097,711,367 · created 2022-01-10T10:32:04 · updated 2022-02-14T09:35:57 · closed 2022-02-14T09:35:57
https://github.com/huggingface/datasets/issues/3554
body: Based on [SO post](https://stackoverflow.com/q/70606147/17840900). I'm following along with this [Notebook][1], cell "**Loading the dataset**". Kernel: `conda_pytorch_p36`. I run: ``` ! pip install datasets transformers optimum[intel] ``` Output: ``` Requirement already satisfied: datasets in /home/ec2-u...
comments: "Hi! I can't reproduce this error in Colab, but I'm assuming you are using Amazon SageMaker Studio Notebooks (you mention the `conda_pytorch_p36` kernel), so maybe @philschmid knows more about what might be causing this issue?" · "Hey @mariosasko. Yes, I am using **Amazon SageMaker Studio Jupyter Labs**. However,...

#3553 · set_format("np") no longer works for Image data · issue · closed · cgarciae
id 1,097,252,275 · created 2022-01-09T17:18:13 · updated 2022-10-14T12:03:55 · closed 2022-10-14T12:03:54
https://github.com/huggingface/datasets/issues/3553
body: ## Describe the bug `dataset.set_format("np")` no longer works for image data. Previously you could load MNIST like this: ```python dataset = load_dataset("mnist") dataset.set_format("np") X_train = dataset["train"]["image"][..., None] # <== No longer a numpy array ``` but now it doesn't work; `set_format(...
comments: "A quick fix for now is doing this: ```python X_train = np.stack(dataset["train"]["image"])[..., None]" · "This error also propagates to jax and is even trickier to fix, since `.with_format(type='jax')` will use numpy conversion internally (and fail). For a three-line failure: ```python dat...

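A minimal sketch of the workaround quoted in the comments of #3553 (assumes the `image` column decodes to objects numpy can stack, as with MNIST):

```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset("mnist")
# set_format("np") no longer returns an array for the Image column, so
# stack the decoded images manually and restore the trailing channel axis.
X_train = np.stack(dataset["train"]["image"])[..., None]
print(X_train.shape)  # (60000, 28, 28, 1)
```
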
#3552 · Add the KMWP & DKTC dataset · pull request · closed · sooftware
id 1,096,985,204 · created 2022-01-08T17:12:14 · updated 2022-01-11T14:13:30 · closed 2022-01-11T14:13:30 · not merged
https://github.com/huggingface/datasets/pull/3552
body: Add the KMWP & DKTC dataset. Additional notes: - Both datasets will be released on January 10 through the GitHub links below. - https://github.com/tunib-ai/DKTC - https://github.com/tunib-ai/KMWP - So the links don't work at the moment, but the code will work soon (after the release on January 10).
comments: none

#3551 · Add more compression types for `to_json` · pull request · closed · bhavitvyamalik
id 1,096,561,111 · created 2022-01-07T18:25:02 · updated 2022-07-10T14:36:55 · closed 2022-02-21T15:58:15 · merged 2022-02-21T15:58...
https://github.com/huggingface/datasets/pull/3551
body: This PR adds `bz2`, `xz`, and `zip` (WIP) for `to_json`. I also plan to add `infer`, like `pandas` does it.
comments: "@lhoestq, I looked into how to compress with `zipfile`, for which a few methods exist; let me know which one looks good: 1. create the file in normal `wb` mode and then zip it separately 2. use `ZipFile.write_str` to write the file into the archive. For this we'll need to change how we're writing files from `_write...

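A sketch of what the PR describes, assuming a pandas-style `compression` keyword on `to_json` (the argument name follows the PR discussion and is not verified against the final API):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})
# bz2 and xz support were added by this PR; zip was still WIP at the time.
ds.to_json("dump.jsonl.bz2", compression="bz2")  # assumed keyword
```
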
#3550 · Bug in `openbookqa` dataset · issue · closed · lucadiliello
id 1,096,522,377 · created 2022-01-07T17:32:57 · updated 2022-05-04T06:33:00 · closed 2022-05-04T06:32:19
https://github.com/huggingface/datasets/issues/3550
body: ## Describe the bug Dataset entries contain a typo. ## Steps to reproduce the bug ```python >>> from datasets import load_dataset >>> obqa = load_dataset('openbookqa', 'main') >>> obqa['train'][0] ``` ## Expected results ```python {'id': '7-980', 'question_stem': 'The sun is responsible for', 'choices'...
comments: "Closed by: #4259"

#3549 · Fix sem_eval_2018_task_1 download location · pull request · closed · maxpel
id 1,096,426,996 · created 2022-01-07T15:37:52 · updated 2022-01-27T15:52:03 · closed 2022-01-27T15:52:03 · not merged
https://github.com/huggingface/datasets/pull/3549
body: This changes the download location of the sem_eval_2018_task_1 files to include the test set labels, as discussed in https://github.com/huggingface/datasets/issues/2745#issuecomment-954588500_ with @lhoestq.
comments: "Hi! Thanks for pushing this :) It seems that you created this PR from an old version of `datasets` that didn't have the sem_eval_2018_task_1.py file. Can you try merging `master` into your branch? Or re-create your PR from a branch that comes from a more recent version of `datasets`? And so...

#3548 · Specify the feature types of a dataset on the Hub without needing a dataset script · issue · closed · lhoestq
id 1,096,409,512 · created 2022-01-07T15:17:06 · updated 2022-01-20T14:48:38 · closed 2022-01-20T14:48:38
https://github.com/huggingface/datasets/issues/3548
body: **Is your feature request related to a problem? Please describe.** Currently, if I upload a CSV with paths to audio files, the column type is string instead of Audio. **Describe the solution you'd like** I'd like to be able to specify the types of the columns, so that when loading the dataset I directly get the feat...
comments: "After looking into this, I discovered that this is already supported if the `dataset_infos.json` file is configured correctly! Here is a working example: https://huggingface.co/datasets/abidlabs/test-audio-13 This should probably be documented, though."

#3547 · Datasets created with `push_to_hub` can't be accessed in offline mode · issue · closed · TevenLeScao
id 1,096,405,515 · created 2022-01-07T15:12:25 · updated 2024-02-15T17:41:24 · closed 2023-12-21T15:13:12
https://github.com/huggingface/datasets/issues/3547
body: ## Describe the bug In offline mode, one can still access previously-cached datasets. This fails with datasets created with `push_to_hub`. ## Steps to reproduce the bug in Python: ``` import datasets mpwiki = datasets.load_dataset("teven/matched_passages_wikidata") ``` in bash: ``` export HF_DATASETS_OFFLIN...
comments: "Thanks for reporting. I think this can be fixed by improving the `CachedDatasetModuleFactory` and making it look into the `parquet` cache directory (datasets from push_to_hub are loaded with the parquet dataset builder). I'll look into it" · "Hi, I'm having the same issue. Is there any update on this?" · "We hav...

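A minimal reproduction sketch of the report in #3547 (`HF_DATASETS_OFFLINE` is the real environment variable; the dataset name comes from the issue body):

```python
import datasets

# First, online: download and cache the push_to_hub-created dataset.
mpwiki = datasets.load_dataset("teven/matched_passages_wikidata")

# Then, in a new shell with `export HF_DATASETS_OFFLINE=1`, the same call
# should be served from the cache, but failed for push_to_hub datasets
# until the fix referenced in the comments landed.
mpwiki = datasets.load_dataset("teven/matched_passages_wikidata")
```
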
#3546 · Remove print statements in datasets · pull request · closed · mariosasko
id 1,096,367,684 · created 2022-01-07T14:30:24 · updated 2022-01-07T18:09:16 · closed 2022-01-07T18:09:15 · merged 2022-01-07T18:09...
https://github.com/huggingface/datasets/pull/3546
body: This is the second time I'm removing print statements in our datasets, so I've added a test to avoid these issues in the future.
comments: "The CI failures are unrelated to the changes."

#3545 · fix: 🐛 pass token when retrieving the split names · pull request · closed · severo
id 1,096,189,889 · created 2022-01-07T10:29:22 · updated 2022-01-10T10:51:47 · closed 2022-01-10T10:51:46 · merged 2022-01-10T10:51...
https://github.com/huggingface/datasets/pull/3545
body: none
comments: "Currently, it does not work with https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0/blob/main/common_voice_7_0.py#L146 (which was the goal), because `dl_manager.download_config.use_auth_token` is ignored, and authentication requires `huggingface-cli login`. In my use case (data...

#3544 · Ability to split a dataset in multiple files. · issue · open · Dref360
id 1,095,784,681 · created 2022-01-06T23:02:25 · updated 2022-01-06T23:02:25
https://github.com/huggingface/datasets/issues/3544
body: Hello, **Is your feature request related to a problem? Please describe.** My use case is that I have one writer that adds columns and multiple workers reading the same `Dataset`. Each worker should have access to columns added by the writer when they reload the dataset. I understand that we shouldn't overwrite...
comments: none

#3543 · Allow loading community metrics from the hub, just like datasets · issue · closed · eladsegal
id 1,095,226,438 · created 2022-01-06T11:26:26 · updated 2022-05-31T20:59:14 · closed 2022-05-31T20:53:37
https://github.com/huggingface/datasets/issues/3543
body: **Is your feature request related to a problem? Please describe.** Currently, I can load a metric I implemented by providing the local path to the file in `load_metric`. However, there is no option to do this with a metric uploaded to the hub. This means that if I want to allow other users to use it, they must d...
comments: "Hi! Thanks for your message :) This is a great idea indeed. We haven't started working on this yet though. For now I guess you can host your metric on the Hub (either with your model or your dataset) and use `hf_hub_download` to download it (docs [here](https://github.com/huggingface/huggingface_hub/blob/main/doc...

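A sketch of the interim workaround suggested in the quoted comment, fetching a metric script from the Hub and loading it by local path (the repo id and filename are hypothetical):

```python
from huggingface_hub import hf_hub_download
from datasets import load_metric

# Download a metric script hosted in a model or dataset repo (hypothetical names).
script_path = hf_hub_download(repo_id="username/my-metric", filename="my_metric.py")

# load_metric accepts a local path to a metric script.
metric = load_metric(script_path)
```
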
#3542 · Update the CC-100 dataset card · pull request · closed · aajanki
id 1,095,088,485 · created 2022-01-06T08:35:18 · updated 2022-01-06T18:37:44 · closed 2022-01-06T18:37:44 · merged 2022-01-06T18:37...
https://github.com/huggingface/datasets/pull/3542
body: summary from the dataset homepage; more details about the data structure; this dataset does not contain annotations
comments: none

#3541 · Support 7-zip compressed data files · issue · open · albertvillanova
id 1,095,033,828 · created 2022-01-06T07:11:03 · updated 2022-07-19T10:18:30
https://github.com/huggingface/datasets/issues/3541
body: **Is your feature request related to a problem? Please describe.** We should support 7-zip compressed data files: - [x] in `extract`: #4672 - [ ] in `iter_archive`, both in streaming and non-streaming modes.
comments: "This should also resolve: https://github.com/huggingface/datasets/issues/3185."

#3540 · How to convert torch.utils.data.Dataset to datasets.arrow_dataset.Dataset? · issue · open · CindyTing
id 1,094,900,336 · created 2022-01-06T02:13:42 · updated 2022-01-06T02:17:39
https://github.com/huggingface/datasets/issues/3540
body: Hi, I use torch.utils.data.Dataset to define my own data, but I need to use the 'map' function of datasets.arrow_dataset.Dataset later, so I hope to convert torch.utils.data.Dataset to datasets.arrow_dataset.Dataset. Here is an example. ``` from torch.utils.data import Dataset from datasets.arrow_dataset import ...
comments: none

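One possible answer to #3540, sketched with `Dataset.from_dict` (assumes the torch dataset yields dicts and fits in memory; the example data is made up):

```python
from torch.utils.data import Dataset as TorchDataset
from datasets import Dataset


class MyTorchDataset(TorchDataset):
    def __init__(self):
        self.rows = [{"text": "a", "label": 0}, {"text": "bb", "label": 1}]

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, idx):
        return self.rows[idx]


torch_ds = MyTorchDataset()
# Materialize rows into a dict of columns, then build an Arrow-backed Dataset.
columns = {key: [torch_ds[i][key] for i in range(len(torch_ds))] for key in torch_ds[0]}
hf_ds = Dataset.from_dict(columns)
hf_ds = hf_ds.map(lambda ex: {"text_len": len(ex["text"])})  # .map is now available
```
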
#3539 · Research wording for nc licenses · pull request · closed · meg-huggingface
id 1,094,813,242 · created 2022-01-05T23:01:38 · updated 2022-01-06T18:58:20 · closed 2022-01-06T18:58:19 · merged 2022-01-06T18:58...
https://github.com/huggingface/datasets/pull/3539
body: none
comments: "The CI failure is about some missing tags or sections in the dataset cards, and is unrelated to the part about non-commercial use of this PR. Merging"

#3538 · Readme usage update · pull request · closed · meg-huggingface
id 1,094,756,755 · created 2022-01-05T21:26:28 · updated 2022-01-05T23:34:25 · closed 2022-01-05T23:24:15 · merged 2022-01-05T23:24...
https://github.com/huggingface/datasets/pull/3538
body: Noticing that the recent commit throws a lot of errors in the automatic checks. It looks to me like those errors were already there (metadata issues), unrelated to what I've just changed, but worth another look to make sure.
comments: none

#3537 · added PII statements and license links to data cards · pull request · closed · mcmillanmajora
id 1,094,738,734 · created 2022-01-05T20:59:21 · updated 2022-01-05T22:02:37 · closed 2022-01-05T22:02:37 · merged 2022-01-05T22:02...
https://github.com/huggingface/datasets/pull/3537
body: Updates the following data cards: multilingual_librispeech, openslr, speech commands, superb, timit_asr, vctk
comments: none

#3536 · update `pretty_name` for all datasets · pull request · closed · bhavitvyamalik
id 1,094,645,771 · created 2022-01-05T18:45:05 · updated 2022-07-10T14:36:54 · closed 2022-01-12T22:59:45 · merged 2022-01-12T22:59...
https://github.com/huggingface/datasets/pull/3536
body: This PR updates `pretty_name` for all datasets. The previous PR #3498 had done this for only the first 200 datasets.
comments: "Pushed the latest changes!"

#3535 · Add SVHN dataset · pull request · closed · mariosasko
id 1,094,633,214 · created 2022-01-05T18:29:09 · updated 2022-01-12T14:14:35 · closed 2022-01-12T14:14:35 · merged 2022-01-12T14:14...
https://github.com/huggingface/datasets/pull/3535
body: Add the SVHN dataset. Additional notes: * compared to the TFDS implementation, additionally exposes the "full numbers" config * adds streaming support for `os.path.splitext` and `scipy.io.loadmat` * adds `h5py` to the requirements list for the dummy data test
comments: none

#3534 · Update wiki_dpr README.md · pull request · closed · lhoestq
id 1,094,352,449 · created 2022-01-05T13:29:44 · updated 2022-02-17T13:45:56 · closed 2022-01-05T14:16:51 · merged 2022-01-05T14:16...
https://github.com/huggingface/datasets/pull/3534
body: Some info about wiki_dpr was missing, as noted in https://github.com/huggingface/datasets/issues/3510. I added it and updated the tags and the examples. Close #3510.
comments: none

#3533 · Task search function on hub not working correctly · issue · open · patrickvonplaten
id 1,094,156,147 · created 2022-01-05T09:36:30 · updated 2022-05-12T14:45:57
https://github.com/huggingface/datasets/issues/3533
body: When I want to look at all datasets of the category `speech-processing`, i.e. https://huggingface.co/datasets?task_categories=task_categories:speech-processing&sort=downloads, the following dataset doesn't show up for some reason: https://huggingface.co/datasets/speech_commands even though its task t...
comments: "known issue due to https://github.com/huggingface/datasets/pull/2362 (and [internal](https://github.com/huggingface/moon-landing/issues/946)), will be solved soon" · "hmm, actually I have no recollection of why I said that" · "Because it has dots in some YAML keys, it can't be parsed and indexed by the back-end"...

#3532 · Give clearer instructions to add the YAML tags · pull request · closed · albertvillanova
id 1,094,035,066 · created 2022-01-05T06:47:52 · updated 2022-01-17T15:54:37 · closed 2022-01-17T15:54:36 · merged 2022-01-17T15:54...
https://github.com/huggingface/datasets/pull/3532
body: Fix #3531. CC: @julien-c @VictorSanh
comments: "this is great, maybe just put all of it in one line? > TODO: Add YAML tags here. Copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging"

#3531 · Give clearer instructions to add the YAML tags · issue · closed · albertvillanova
id 1,094,033,280 · created 2022-01-05T06:44:20 · updated 2022-01-17T15:54:36 · closed 2022-01-17T15:54:36
https://github.com/huggingface/datasets/issues/3531
body: ## Describe the bug As reported by @julien-c, many community datasets contain the line `YAML tags:` at the top of the YAML section in the header of the README file. See e.g.: https://huggingface.co/datasets/bigscience/P3/commit/a03bea08cf4d58f268b469593069af6aeb15de32 Maybe we should give clearer instructions/hints...
comments: none

#3530 · Update README.md · pull request · closed · meg-huggingface
id 1,093,894,732 · created 2022-01-05T01:32:07 · updated 2022-01-05T12:50:51 · closed 2022-01-05T12:50:50 · merged 2022-01-05T12:50...
https://github.com/huggingface/datasets/pull/3530
body: Removing the reference to "Common Voice" in the Personal and Sensitive Information section. Adding a link to the license. Correcting the license type in the metadata.
comments: none

#3529 · Update README.md · pull request · closed · meg-huggingface
id 1,093,846,356 · created 2022-01-04T23:52:47 · updated 2022-01-05T12:50:15 · closed 2022-01-05T12:50:14 · merged 2022-01-05T12:50...
https://github.com/huggingface/datasets/pull/3529
body: Updating licensing information & personal and sensitive information.
comments: none

#3528 · Update README.md · pull request · closed · meg-huggingface
id 1,093,844,616 · created 2022-01-04T23:48:11 · updated 2022-01-05T12:49:41 · closed 2022-01-05T12:49:40 · merged 2022-01-05T12:49...
https://github.com/huggingface/datasets/pull/3528
body: Updating the license with appropriate capitalization & a link. Updating Personal and Sensitive Information to address a PII concern.
comments: none

#3527 · Update README.md · pull request · closed · meg-huggingface
id 1,093,840,707 · created 2022-01-04T23:39:41 · updated 2022-01-05T00:23:50 · closed 2022-01-05T00:23:50 · merged 2022-01-05T00:23...
https://github.com/huggingface/datasets/pull/3527
body: Adding licensing information.
comments: none

#3526 · Update license to bookcorpus dataset card · pull request · closed · meg-huggingface
id 1,093,833,446 · created 2022-01-04T23:25:23 · updated 2022-09-30T10:23:38 · closed 2022-09-30T10:21:20 · merged 2022-09-30T10:21...
https://github.com/huggingface/datasets/pull/3526
body: Not entirely sure, following the links here, but it seems the relevant license is at https://github.com/soskek/bookcorpus/blob/master/LICENSE
comments: "The smashwords ToS apply for this dataset; we did the same for https://github.com/huggingface/datasets/pull/3525" · "_The documentation is not available anymore as the PR was closed or merged._"

#3525 · Adding license information for Openbookcorpus · pull request · closed · meg-huggingface
id 1,093,831,268 · created 2022-01-04T23:20:36 · updated 2022-04-20T09:54:30 · closed 2022-04-20T09:48:10 · merged 2022-04-20T09:48...
https://github.com/huggingface/datasets/pull/3525
body: Not entirely sure, following the links here, but it seems the relevant license is at https://github.com/soskek/bookcorpus/blob/master/LICENSE
comments: "The MIT license seems to be for the crawling code, no? Then maybe we can also redirect users to the [terms of smashwords.com](https://www.smashwords.com/about/tos) regarding copyrights, in particular paragraph 10 for end-users. In particular it seems that end users can download and use the content "for their...

#3524 · Adding link to license. · pull request · closed · meg-huggingface
id 1,093,826,723 · created 2022-01-04T23:11:48 · updated 2022-01-05T12:31:38 · closed 2022-01-05T12:31:37 · merged 2022-01-05T12:31...
https://github.com/huggingface/datasets/pull/3524
body: none
comments: none

#3523 · Added links to licensing and PII message in vctk dataset · pull request · closed · mcmillanmajora
id 1,093,819,227 · created 2022-01-04T22:56:58 · updated 2022-01-06T19:33:50 · closed 2022-01-06T19:33:50 · merged 2022-01-06T19:33...
https://github.com/huggingface/datasets/pull/3523
body: none
comments: none

#3522 · wmt19 is broken (zh-en) · issue · closed · AjayP13
id 1,093,807,586 · created 2022-01-04T22:33:45 · updated 2022-05-06T16:27:37 · closed 2022-05-06T16:27:37
https://github.com/huggingface/datasets/issues/3522
body: ## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("wmt19", 'zh-en') ``` ## Expected results The dataset should download. ## Actual results `ConnectionError: Couldn't reach ftp://cwmt-wm...
comments: "This issue is not reproducible."

#3521 · Vivos license update · pull request · closed · mcmillanmajora
id 1,093,797,947 · created 2022-01-04T22:17:47 · updated 2022-01-04T22:18:16 · closed 2022-01-04T22:18:16 · not merged
https://github.com/huggingface/datasets/pull/3521
body: Updated the license information with the link to the license text
comments: none

#3520 · Audio datacard update - first pass · pull request · closed · meg-huggingface
id 1,093,747,753 · created 2022-01-04T20:58:25 · updated 2022-01-05T12:30:21 · closed 2022-01-05T12:30:20 · merged 2022-01-05T12:30...
https://github.com/huggingface/datasets/pull/3520
body: Filling out the data card "Personal and Sensitive Information" section for speech datasets to note PII concerns
comments: "I'm not sure that we want to change the tags at the top of the cards by hand. Those are used to create the tags in the hub. Although looking at all the tags now, we might want to normalize the current tags again (hyphens or no, ".0" or no). Maybe we could add a binary tag for public domain or not?" · "> ...

#3519 · CC100: Using HTTPS for the data source URL fixes load_dataset() · pull request · closed · aajanki
id 1,093,655,205 · created 2022-01-04T18:45:54 · updated 2022-01-05T17:28:34 · closed 2022-01-05T17:28:34 · merged 2022-01-05T17:28...
https://github.com/huggingface/datasets/pull/3519
body: Without this change the following script (with any lang parameter) consistently fails. After changing to the HTTPS URL, the script works as expected. ```python from datasets import load_dataset dataset = load_dataset("cc100", lang="en") ``` This is the error produced by the previous script: ```sh Using cus...
comments: none

#3518 · Add PubMed Central Open Access dataset · issue · closed · albertvillanova
id 1,093,063,455 · created 2022-01-04T06:54:35 · updated 2022-01-17T15:25:57 · closed 2022-01-17T15:25:57
https://github.com/huggingface/datasets/issues/3518
body: ## Adding a Dataset - **Name:** PubMed Central Open Access - **Description:** The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under license terms that allow reuse. - **Paper:** *link to the dataset paper if available* - **Data:** https://www.ncbi.nlm....
comments: "In the framework of BigScience: bigscience-workshop/data_tooling#121 I have created this dataset as a community dataset: https://huggingface.co/datasets/albertvillanova/pmc_open_access However, I was wondering whether it may be more appropriate to move it under an org namespace: `pubmed_central` or...

#3517 · Add CPPE-5 dataset · pull request · closed · mariosasko
id 1,092,726,651 · created 2022-01-03T18:31:20 · updated 2022-01-19T02:23:37 · closed 2022-01-05T18:53:02 · merged 2022-01-05T18:53...
https://github.com/huggingface/datasets/pull/3517
body: Adds the recently released CPPE-5 dataset.
comments: "Thanks so much, @mariosasko and @lhoestq, much appreciated!"

#3516 · dataset `asset` - change to raw.githubusercontent.com URLs · pull request · closed · VictorSanh
id 1,092,657,738 · created 2022-01-03T16:43:57 · updated 2022-01-03T17:39:02 · closed 2022-01-03T17:39:01 · merged 2022-01-03T17:39...
https://github.com/huggingface/datasets/pull/3516
body: Changed the URLs to the ones they were automatically redirecting to. Before, the download was failing.
comments: none

#3515 · `ExpectedMoreDownloadedFiles` for `evidence_infer_treatment` · issue · closed · VictorSanh
id 1,092,624,695 · created 2022-01-03T15:58:38 · updated 2022-02-14T13:21:43 · closed 2022-02-14T13:21:43
https://github.com/huggingface/datasets/issues/3515
body: ## Describe the bug I am trying to load a dataset called `evidence_infer_treatment`. The first subset (`1.1`) works fine, but the second (`2.0`) returns an error. It downloads a file but crashes during the checksums. ## Steps to reproduce the bug ```python >>> from datasets import load_dataset >>> load_dataset("e...
comments: "Thanks for reporting @VictorSanh. I'm looking at it..."

#3514 · Fix to_tf_dataset references in docs · pull request · closed · mariosasko
id 1,092,606,383 · created 2022-01-03T15:31:39 · updated 2022-01-05T18:52:48 · closed 2022-01-05T18:52:48 · merged 2022-01-05T18:52...
https://github.com/huggingface/datasets/pull/3514
body: Fix the `to_tf_dataset` references in the docs. The currently failing example of usage will be fixed by #3338.
comments: "The code snippet in [this section](https://huggingface.co/docs/datasets/master/use_dataset.html?highlight=to_tf_dataset#tensorflow) is missing an import (`DataCollatorWithPadding`) and doesn't initialize the TF model before the `model.fit` call."

#3513 · Add desc parameter to filter · pull request · closed · mariosasko
id 1,092,569,802 · created 2022-01-03T14:44:18 · updated 2022-01-05T18:31:25 · closed 2022-01-05T18:31:25 · merged 2022-01-05T18:31...
https://github.com/huggingface/datasets/pull/3513
body: Fix #3317
comments: none

#3512 · No Data format found · issue · closed · shazzad47
id 1,092,359,973 · created 2022-01-03T09:41:11 · updated 2022-01-17T13:26:05 · closed 2022-01-17T13:26:05
https://github.com/huggingface/datasets/issues/3512
body: ## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset? Yes-No
comments: "Hi, which dataset is giving you an error?"

#3511 · Dataset · issue · closed · MIKURI0114
id 1,092,170,411 · created 2022-01-03T02:03:23 · updated 2022-01-03T08:41:26 · closed 2022-01-03T08:23:07
https://github.com/huggingface/datasets/issues/3511
body: ## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset? Yes-No
comments: "Can you reopen with the correct dataset name (if relevant)? Thanks" · "The dataset viewer was down tonight. It works again."

#3510 · `wiki_dpr` details for Open Domain Question Answering tasks · issue · closed · pk1130
id 1,091,997,004 · created 2022-01-02T11:04:01 · updated 2022-02-17T13:46:20 · closed 2022-02-17T13:46:20
https://github.com/huggingface/datasets/issues/3510
body: Hey guys! Thanks for creating the `wiki_dpr` dataset! I am currently trying to use the dataset for context retrieval using DPR on NQ questions and need details about what each of the files and data instances mean, which version of the Wikipedia dump it uses, etc. Please respond at your earliest convenience regard...
comments: "Hi! According to the DPR paper, the Wikipedia dump is the one from Dec. 20, 2018. Each instance contains a paragraph of at most 100 words, as well as the title of the Wikipedia page it comes from and the DPR embedding (a 768-d vector)." · "Closed by: #3534"

#3507 · Discuss whether to support canonical datasets w/o dataset_infos.json and/or dummy data · issue · closed · albertvillanova
id 1,091,214,808 · created 2021-12-30T17:04:25 · updated 2022-11-04T15:31:38 · closed 2022-11-04T15:31:37
https://github.com/huggingface/datasets/issues/3507
body: I open this issue to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point of having also the JSON metadata (dataset_infos.json file)? On the other hand, the d...
comments: "IMO, the data streaming test is good enough of a test that the dataset works correctly (assuming that we can more or less ensure that if streaming works then the non-streaming case will also work), so for datasets that have a working dataset preview, we can remove the dummy data IMO. On the other hand, it see...

#3506 · Allows DatasetDict.filter to have batching option · pull request · closed · thomasw21
id 1,091,166,595 · created 2021-12-30T15:22:22 · updated 2022-01-04T10:24:28 · closed 2022-01-04T10:24:27 · merged 2022-01-04T10:24...
https://github.com/huggingface/datasets/pull/3506
body: - Related to: #3244 - Fixes: #3503 This extends `.filter(... batched: bool)` support to DatasetDict.
comments: none

#3505 · cast_column function not working with map function in streaming mode for Audio features · issue · closed · ashu5644
id 1,091,150,820 · created 2021-12-30T14:52:01 · updated 2022-01-18T19:54:07 · closed 2022-01-18T19:54:07
https://github.com/huggingface/datasets/issues/3505
body: ## Describe the bug I am trying to use the Audio class for loading audio features using a custom dataset. I am able to cast the 'audio' feature into 'Audio' format with the cast_column function. On using the map function, I am not getting the 'Audio'-casted feature but only the path of the audio file. I am getting features of 'audio' of s...
comments: "Hi! This is probably due to the fact that `IterableDataset.map` sets `features` to `None` before mapping examples. We can fix the issue by passing the old `features` dict to the map generator and performing encoding/decoding there (before calling the map transform function)."

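A minimal sketch of the pattern this issue describes, fixed by #3556 (`cast_column` on a streaming dataset is real API; the config name follows the common_voice example used elsewhere in this dataset):

```python
from datasets import load_dataset, Audio

ds = load_dataset("common_voice", "ab", split="train", streaming=True)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

# Before the fix, IterableDataset.map dropped `features`, so the transform
# saw the raw path instead of the decoded {"array", "sampling_rate"} dict.
ds = ds.map(lambda ex: {"num_samples": len(ex["audio"]["array"])})
print(next(iter(ds))["num_samples"])
```
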
#3504 · Unable to download PUBMED_title_abstracts_2019_baseline.jsonl.zst · issue · closed · ToddMorrill
id 1,090,682,230 · created 2021-12-29T18:23:20 · updated 2024-05-20T09:44:59 · closed 2022-02-17T15:04:25
https://github.com/huggingface/datasets/issues/3504
body: ## Describe the bug I am unable to download the PubMed dataset from the link provided in the [Hugging Face Course (Chapter 5 Section 4)](https://huggingface.co/course/chapter5/4?fw=pt). https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst ## Steps to reproduce ...
comments: "Hi @ToddMorrill, thanks for reporting. Three weeks ago I contacted the team who created the Pile dataset to report this issue with their data host server: https://the-eye.eu They told me that unfortunately, the-eye was heavily affected by the recent tornado catastrophe in the US. They hope to have th...

#3503 · Batched in filter throws error · issue · closed · gpucce
id 1,090,472,735 · created 2021-12-29T12:01:04 · updated 2022-01-04T10:24:27 · closed 2022-01-04T10:24:27
https://github.com/huggingface/datasets/issues/3503
body: I hope this is really a bug; I could not find it among the open issues. ## Describe the bug Using `batched=False` in Dataset.filter throws an error: ```python TypeError: filter() got an unexpected keyword argument 'batched' ``` but in the docs it is listed as an argument. ## Steps to reproduce the bug ```python ...
comments: none

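A sketch of the call that #3506 enables on DatasetDict (the batched predicate contract mirrors `Dataset.filter`):

```python
from datasets import Dataset, DatasetDict

dd = DatasetDict({"train": Dataset.from_dict({"text": ["", "keep me", "and me"]})})

# With batched=True the predicate receives a batch (a dict of lists)
# and must return one boolean per example.
dd = dd.filter(lambda batch: [len(t) > 0 for t in batch["text"]], batched=True)
print(dd["train"]["text"])  # ['keep me', 'and me']
```
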
#3502 · Add QuALITY · pull request · closed · jaketae
id 1,090,438,558 · created 2021-12-29T10:58:46 · updated 2022-10-03T09:36:14 · closed 2022-10-03T09:36:14 · not merged
https://github.com/huggingface/datasets/pull/3502
body: Fixes #3441.
comments: "Thanks for your contribution, @jaketae. Are you still interested in adding this dataset? We are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets We would suggest you create this dataset there. Please, feel free to tell us if y...

#3501 · Update pib dataset card · pull request · closed · albertvillanova
id 1,090,413,758 · created 2021-12-29T10:14:40 · updated 2021-12-29T11:13:21 · closed 2021-12-29T11:13:21 · merged 2021-12-29T11:13...
https://github.com/huggingface/datasets/pull/3501
body: Related to #3496
comments: none

#3500 · Docs: Add VCTK dataset description · pull request · closed · jaketae
id 1,090,406,133 · created 2021-12-29T10:02:05 · updated 2022-01-04T10:46:02 · closed 2022-01-04T10:25:09 · merged 2022-01-04T10:25...
https://github.com/huggingface/datasets/pull/3500
body: This PR is a very minor follow-up to #1837, with only docs changes (a single comment string).
comments: none

#3499 · Adjusting chunk size for streaming datasets · issue · closed · JoelNiklaus
id 1,090,132,618 · created 2021-12-28T21:17:53 · updated 2022-05-06T16:29:05 · closed 2022-05-06T16:29:05
https://github.com/huggingface/datasets/issues/3499
body: **Is your feature request related to a problem? Please describe.** I want to use mc4, which I cannot save locally, so I stream it. However, I want to process the entire dataset and filter some documents from it. With the current chunk size of around 1000 documents (right?) I hit a performance bottleneck because of the ...
comments: "Hi! Data streaming uses `fsspec` to read the data files progressively. IIRC the block size for buffering is 5MiB by default. So every time you finish iterating over a block, it downloads the next one. You can still try to increase the `fsspec` block size for buffering if it can help. To do so you just need to inc...

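For reference, a sketch of the streaming workflow the request describes (the mc4 config name is assumed, and `IterableDataset.filter`/`take` only exist in later `datasets` releases, so treat this as illustrative):

```python
from datasets import load_dataset

# Stream mc4 instead of materializing it locally, filtering on the fly.
ds = load_dataset("mc4", "en", split="train", streaming=True)
filtered = ds.filter(lambda ex: len(ex["text"]) > 1000)

for example in filtered.take(3):  # preview a few surviving documents
    print(example["text"][:80])
```
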
1,090,096,332
3,498
update `pretty_name` for first 200 datasets
closed
[]
2021-12-28T19:50:07
2022-07-10T14:36:53
2022-01-05T16:38:21
I made a script some time back to fetch `pretty_names` from `papers_with_code` dataset along with some other rules incase that dataset wasn't available on `papers_with_code`. Updating them in the `README` of `datasets`. Took only the first 200 datasets into consideration and after some eyeballing, most of them were loo...
bhavitvyamalik
https://github.com/huggingface/datasets/pull/3498
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3498", "html_url": "https://github.com/huggingface/datasets/pull/3498", "diff_url": "https://github.com/huggingface/datasets/pull/3498.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3498.patch", "merged_at": "2022-01-05T16:38...
true
1,090,050,148
3,497
Changing sampling rate in audio dataset and subsequently mapping with `num_proc > 1` leads to weird bug
closed
[ "Same error occures when using max samples with https://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py", "I'm seeing this too, when using preprocessing_num_workers with \r\nhttps://github.com/huggingface/transformers/blob/master/examples/pytor...
2021-12-28T18:03:49
2022-01-21T13:22:27
2022-01-21T13:22:27
Running: ```python from datasets import load_dataset, DatasetDict import datasets from transformers import AutoFeatureExtractor raw_datasets = DatasetDict() raw_datasets["train"] = load_dataset("common_voice", "ab", split="train") feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2ve...
patrickvonplaten
https://github.com/huggingface/datasets/issues/3497
null
false
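The truncated reproduction above boils down to the following pattern; the sampling rate and the identity map are illustrative stand-ins, not the reporter's exact code:

```python
from datasets import Audio, load_dataset

ds = load_dataset("common_voice", "ab", split="train")

# Re-declaring the sampling rate makes audio decoding resample on the fly...
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

# ...and at the time of this report, mapping over the result with more than
# one process triggered the error (later fixed by the nested-casting PR).
ds = ds.map(lambda example: example, num_proc=2)
```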
1,089,989,155
3,496
Update version of pib dataset and make it streamable
closed
[ "It seems like there is still an error: `Message: 'TarContainedFile' object has no attribute 'readable'`\r\n\r\nhttps://huggingface.co/datasets/pib/viewer", "@severo I was wondering about that...\r\n\r\nIt works fine when I run it in streaming mode in my terminal:\r\n```python\r\nIn [3]: from datasets impor...
2021-12-28T16:01:55
2022-01-03T14:42:28
2021-12-29T08:42:57
This PR: - Updates the version of the pib dataset from 0.0.0 to 1.3.0 - Makes the dataset streamable Fixes #3491. CC: @severo
albertvillanova
https://github.com/huggingface/datasets/pull/3496
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3496", "html_url": "https://github.com/huggingface/datasets/pull/3496", "diff_url": "https://github.com/huggingface/datasets/pull/3496.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3496.patch", "merged_at": "2021-12-29T08:42...
true
1,089,983,632
3,495
Add VoxLingua107
open
[]
2021-12-28T15:51:43
2021-12-28T15:51:43
null
## Adding a Dataset - **Name:** VoxLingua107 - **Description:** VoxLingua107 is a speech dataset for training spoken language identification models. - **Paper:** https://arxiv.org/abs/2011.12998 - **Data:** http://bark.phon.ioc.ee/voxlingua107/ - **Motivation:** 107 languages, totaling 6628 hours for the train sp...
jaketae
https://github.com/huggingface/datasets/issues/3495
null
false
1,089,983,103
3,494
Clone full repo to detect new tags when mirroring datasets on the Hub
closed
[ "Good catch !!", "The CI fail is unrelated to this PR and fixed on master, merging :)" ]
2021-12-28T15:50:47
2021-12-28T16:07:21
2021-12-28T16:07:20
The new releases of `datasets` were not detected because the shallow clone in the CI wasn't getting the git tags. By cloning the full repository we can properly detect a new release, and tag all the dataset repositories accordingly cc @SBrandeis
lhoestq
https://github.com/huggingface/datasets/pull/3494
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3494", "html_url": "https://github.com/huggingface/datasets/pull/3494", "diff_url": "https://github.com/huggingface/datasets/pull/3494.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3494.patch", "merged_at": "2021-12-28T16:07...
true
1,089,967,286
3,493
Fix VCTK encoding
closed
[]
2021-12-28T15:23:36
2021-12-28T15:48:18
2021-12-28T15:48:17
utf-8 encoding was missing in the VCTK dataset builder added in #3351
lhoestq
https://github.com/huggingface/datasets/pull/3493
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3493", "html_url": "https://github.com/huggingface/datasets/pull/3493", "diff_url": "https://github.com/huggingface/datasets/pull/3493.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3493.patch", "merged_at": "2021-12-28T15:48...
true
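For reference, the fix is the usual explicit-encoding one-liner; a sketch of the pattern, with the filename as a hypothetical placeholder:

```python
txt_path = "p225_001.txt"  # illustrative VCTK transcript filename

# Without an explicit encoding, open() falls back to the platform default,
# which can mangle UTF-8 transcription files on some systems.
with open(txt_path, encoding="utf-8") as f:
    transcription = f.read().strip()
```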
1,089,952,943
3,492
Add `gzip` for `to_json`
closed
[]
2021-12-28T15:01:11
2022-07-10T14:36:52
2022-01-05T13:03:36
(Partially) closes #3480. I have added `gzip` compression for `to_json`. I realised we can run into this compression problem with `to_csv` as well. `IOHandler` can be used for `to_csv` too. Please let me know if any changes are required.
bhavitvyamalik
https://github.com/huggingface/datasets/pull/3492
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3492", "html_url": "https://github.com/huggingface/datasets/pull/3492", "diff_url": "https://github.com/huggingface/datasets/pull/3492.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3492.patch", "merged_at": "2022-01-05T13:03...
true
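Once merged, the usage looks roughly like this, assuming the `compression` kwarg is forwarded the way pandas spells it:

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")

# The kwarg mirrors pandas.DataFrame.to_json's compression parameter.
ds.to_json("train.jsonl.gz", compression="gzip")
```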
1,089,918,018
3,491
Update version of pib dataset
closed
[]
2021-12-28T14:03:58
2021-12-29T08:42:57
2021-12-29T08:42:57
On the Hub we have v0, while v1.3 already exists upstream. Related to bigscience-workshop/data_tooling#130
albertvillanova
https://github.com/huggingface/datasets/issues/3491
null
false
1,089,730,181
3,490
Does datasets support load text from HDFS?
open
[ "Hi ! `datasets` currently supports reading local files or files over HTTP. We may add support for other filesystems (cloud storages, hdfs...) at one point though :)" ]
2021-12-28T08:56:02
2022-02-14T14:00:51
null
The raw text data is stored on HDFS because the dataset is too large to keep on my development machine, so I wonder: does datasets support reading data from HDFS?
dancingpipi
https://github.com/huggingface/datasets/issues/3490
null
false
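Until native support lands, one workaround is to let `fsspec`'s HDFS implementation (backed by pyarrow) stage the file locally first; the host, port, and paths below are placeholders:

```python
import fsspec
from datasets import load_dataset

# Requires pyarrow's HDFS bindings; host/port/paths are placeholders.
fs = fsspec.filesystem("hdfs", host="namenode", port=8020)
fs.get("/corpora/raw_text.txt", "raw_text.txt")

ds = load_dataset("text", data_files="raw_text.txt")
```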
1,089,401,926
3,489
Avoid unnecessary list creations
open
[ "@bryant1410 Thanks for working on this. Could you please split the PR into 4 or 5 smaller PRs (ideally one PR for each bullet point from your description) because it's not practical to review such a large PR, especially if the changes are not interrelated?" ]
2021-12-27T18:20:56
2022-07-06T15:19:49
null
Like in `join([... for s in ...])`. Also changed other things that I saw: * Use a `with` statement for the many `open` calls that were missing one, so the files don't remain open. * Remove unused variables. * Convert many HTTP links to HTTPS (verified). * Remove the unnecessary "r" mode arg in `open` (double-checked it was actual...
bryant1410
https://github.com/huggingface/datasets/pull/3489
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3489", "html_url": "https://github.com/huggingface/datasets/pull/3489", "diff_url": "https://github.com/huggingface/datasets/pull/3489.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3489.patch", "merged_at": null }
true
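For context, the `join` pattern called out in the description is part of a broader generator-vs-list trade-off; a quick sketch:

```python
words = ["alpha", "beta", "gamma"]

# For short-circuiting builtins such as any()/all(), a generator expression
# avoids materializing the whole list up front:
has_long = any(len(w) > 5 for w in words)

# str.join is the well-known exception: CPython converts a generator argument
# to a list internally anyway, so join([...]) vs join(...) is mostly a matter
# of style rather than performance.
joined = "-".join(w.upper() for w in words)
```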
1,089,345,653
3,488
URL query parameters are set as path in the compression hop for fsspec
open
[ "I think the test passes because it simply ignore what's after `gzip://`.\r\n\r\nThe returned urlpath is expected to look like `gzip://filename::url`, and the filename is currently considered to be what's after the final `/`, hence the result.\r\n\r\nWe can decide to change this and simply have `gzip://::url`, this...
2021-12-27T16:29:00
2022-01-05T15:15:25
null
## Describe the bug There is an issue with `StreamingDownloadManager._extract`. I don't know how the test `test_streaming_gg_drive_gzipped` passes: For ```python TEST_GG_DRIVE_GZIPPED_URL = "https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz" urlpath = StreamingDownloadManager()....
albertvillanova
https://github.com/huggingface/datasets/issues/3488
null
false
1,089,209,031
3,487
Update ADD_NEW_DATASET.md
closed
[]
2021-12-27T12:24:51
2021-12-27T15:00:45
2021-12-27T15:00:45
Fixed the `make style` prompt for Windows Terminal.
apergo-ai
https://github.com/huggingface/datasets/pull/3487
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3487", "html_url": "https://github.com/huggingface/datasets/pull/3487", "diff_url": "https://github.com/huggingface/datasets/pull/3487.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3487.patch", "merged_at": "2021-12-27T15:00...
true
1,089,171,551
3,486
Fix weird spacing in ManualDownloadError message
closed
[]
2021-12-27T11:20:36
2021-12-28T09:03:26
2021-12-28T09:00:28
`textwrap.dedent` removes only the whitespace prefix common to all lines. Before this change, the message had no leading spaces, so nothing was dedented.
bryant1410
https://github.com/huggingface/datasets/pull/3486
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3486", "html_url": "https://github.com/huggingface/datasets/pull/3486", "diff_url": "https://github.com/huggingface/datasets/pull/3486.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3486.patch", "merged_at": "2021-12-28T09:00...
true
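The one-sentence body is easy to miss, so here is the behaviour it describes:

```python
import textwrap

# textwrap.dedent removes only the whitespace prefix common to ALL lines.
# If the first line has no leading spaces, the common prefix is "" and
# nothing is stripped.
broken = "No leading space here\n    indented continuation\n"
print(textwrap.dedent(broken))   # printed unchanged

fixed = "    leading space here\n    indented continuation\n"
print(textwrap.dedent(fixed))    # both lines lose the four-space prefix
```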
1,089,027,581
3,485
skip columns which cannot set to specific format when set_format
closed
[ "You can add columns that you wish to set into `torch` format using `dataset.set_format(\"torch\", ['id', 'abc'])` so that input batch of the transform only contains those columns", "Sorry, I miss `output_all_columns` args and thought after `dataset.set_format(\"torch\", columns=columns)` I can only get specific ...
2021-12-27T07:19:55
2021-12-27T09:07:07
2021-12-27T09:07:07
**Is your feature request related to a problem? Please describe.** When using `dataset.set_format("torch")`, I must make sure every column in the dataset can be converted to `torch`; however, sometimes I want to keep some string columns. **Describe the solution you'd like** Skip columns which cannot be set to a specific forma...
tshu-w
https://github.com/huggingface/datasets/issues/3485
null
false
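The resolution from the comments, as a runnable sketch (column names are illustrative):

```python
from datasets import Dataset

ds = Dataset.from_dict({"id": [0, 1], "text": ["a", "b"], "x": [[1.0], [2.0]]})

# Format only the numeric columns as torch tensors, while still returning
# the untouched string column alongside them:
ds.set_format("torch", columns=["id", "x"], output_all_columns=True)
print(ds[0])  # id and x come back as tensors; text stays a plain str
```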
1,088,910,402
3,484
make shape verification to use ArrayXD instead of nested lists for map
open
[ "Hi! \r\n\r\nYes, this makes sense for numeric values, but first I have to finish https://github.com/huggingface/datasets/pull/3336 because currently ArrayXD only allows the first dimension to be dynamic." ]
2021-12-27T02:16:02
2022-01-05T13:54:03
null
As described in https://github.com/huggingface/datasets/issues/2005#issuecomment-793716753 and mentioned by @mariosasko in the [image feature example](https://colab.research.google.com/drive/1mIrTnqTVkWLJWoBzT1ABSe-LFelIep1c#scrollTo=ow3XHDvf2I0B&line=1&uniqifier=1), IMO shape verification should use ArrayXD instead of nest...
tshu-w
https://github.com/huggingface/datasets/issues/3484
null
false
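For reference, the ArrayXD feature types the request points to are declared like this; a minimal sketch:

```python
from datasets import Array2D, Dataset, Features

features = Features({"matrix": Array2D(shape=(2, 2), dtype="float32")})
ds = Dataset.from_dict(
    {"matrix": [[[0.0, 1.0], [2.0, 3.0]]]},  # a single 2x2 example
    features=features,
)
# With a fixed-shape Array2D, a wrongly shaped example fails when the data
# is cast to Arrow, rather than slipping through as ragged nested lists.
```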
1,088,784,157
3,483
Remove unused phony rule from Makefile
closed
[ "The CI failure is unrelated to this PR and fixed on master, merging !" ]
2021-12-26T14:37:13
2022-01-05T19:44:56
2022-01-05T16:34:12
null
bryant1410
https://github.com/huggingface/datasets/pull/3483
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3483", "html_url": "https://github.com/huggingface/datasets/pull/3483", "diff_url": "https://github.com/huggingface/datasets/pull/3483.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3483.patch", "merged_at": "2022-01-05T16:34...
true
1,088,317,921
3,482
Fix duplicate keys in NewsQA
closed
[ "Flaky tests?", "Thanks for your contribution, @bryant1410.\r\n\r\nI think the fix of the duplicate key in this PR was superseded by:\r\n- #3696\r\n\r\nI'm closing this because we are moving all dataset scripts from GitHub to the Hugging Face Hub." ]
2021-12-24T11:01:59
2022-09-23T12:57:10
2022-09-23T12:57:10
* Fix duplicate keys in NewsQA when loading from CSV files. * Fix s/narqa/newsqa/ in the manual-download error message. * Make the manual-download error message display nicely when printed; otherwise, it is hard to read due to spacing issues. * Fix the format of the license text. * Reformat the code to make it simple...
bryant1410
https://github.com/huggingface/datasets/pull/3482
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3482", "html_url": "https://github.com/huggingface/datasets/pull/3482", "diff_url": "https://github.com/huggingface/datasets/pull/3482.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3482.patch", "merged_at": null }
true
1,088,308,343
3,481
Fix overriding of filesystem info
closed
[]
2021-12-24T10:42:31
2021-12-24T11:08:59
2021-12-24T11:08:59
Previously, `BaseCompressedFileFileSystem.info` was overridden and transformed from function to dict. This generated a bug for filesystem methods that use `self.info()`, like e.g. `fs.isfile()`. This PR: - Adds tests for `fs.isfile` (that use `fs.info`). - Fixes custom `BaseCompressedFileFileSystem.info` by rem...
albertvillanova
https://github.com/huggingface/datasets/pull/3481
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3481", "html_url": "https://github.com/huggingface/datasets/pull/3481", "diff_url": "https://github.com/huggingface/datasets/pull/3481.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3481.patch", "merged_at": "2021-12-24T11:08...
true
1,088,267,110
3,480
the compression format requested when saving a dataset in json format is not respected
closed
[ "Thanks for reporting @SaulLu.\r\n\r\nAt first sight I think the problem is caused because `pandas` only takes into account the `compression` parameter if called with a non-null file path or buffer. And in our implementation, we call pandas `to_json` with `None` `path_or_buf`.\r\n\r\nWe should fix this:\r\n- either...
2021-12-24T09:23:51
2022-01-05T13:03:35
2022-01-05T13:03:35
## Describe the bug In the documentation of the `to_json` method, it is stated in the parameters that > **to_json_kwargs – Parameters to pass to pandas's pandas.DataFrame.to_json. However, when we pass, for example, `compression="gzip"`, the saved file is not compressed. Would you also have expected compression t...
SaulLu
https://github.com/huggingface/datasets/issues/3480
null
false
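The root cause sketched in the first comment can be reproduced with pandas alone (behaviour as of the pandas versions current in late 2021):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})

# With a real path, pandas honors the compression kwarg:
df.to_json("out.json.gz", compression="gzip")

# With path_or_buf=None pandas just returns a string, and datasets' to_json
# called pandas this way, leaving the compression kwarg with nothing to do.
json_str = df.to_json(None, compression="gzip")
```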
1,088,232,880
3,479
Dataset preview is not available (I think for all Hugging Face datasets)
closed
[ "You're right, we have an issue today with the datasets preview. We're investigating.", "It should be fixed now. Thanks for reporting.", "Down again. ", "Fixed for good." ]
2021-12-24T08:18:48
2021-12-24T14:27:46
2021-12-24T14:27:46
## Dataset viewer issue for '*french_book_reviews*' **Link:** https://huggingface.co/datasets/Abirate/french_book_reviews **short description of the issue** For my dataset, the dataset preview is no longer functional (it used to work: The dataset had been added the day before and it was fine...) And, after lo...
Abirate
https://github.com/huggingface/datasets/issues/3479
null
false
1,087,860,180
3,478
Extend support for streaming datasets that use os.walk
closed
[ "Nice. I'll update the dataset viewer once merged, and test on these four datasets" ]
2021-12-23T16:42:55
2021-12-24T10:50:20
2021-12-24T10:50:19
This PR extends the support in streaming mode for datasets that use `os.walk`, by patching that function. This PR adds support for streaming mode to datasets: 1. autshumato 1. code_x_glue_cd_code_to_text 1. code_x_glue_tc_nl_code_search_adv 1. nchlt CC: @severo
albertvillanova
https://github.com/huggingface/datasets/pull/3478
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3478", "html_url": "https://github.com/huggingface/datasets/pull/3478", "diff_url": "https://github.com/huggingface/datasets/pull/3478.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3478.patch", "merged_at": "2021-12-24T10:50...
true
1,087,850,253
3,477
Use `iter_files` instead of `str(Path(...)` in image dataset
closed
[ "`iter_archive` is about to support ZIP archives. I think we should use this no ?\r\n\r\nsee #3347 https://github.com/huggingface/datasets/pull/3379", "I was interested in the support for isfile/dir in remote.\r\n\r\nAnyway, `iter_files` will be available for community users.", "I'm not a big fan of having two ...
2021-12-23T16:26:55
2021-12-28T15:15:02
2021-12-28T15:15:02
Use `iter_files` in the `beans` and the `cats_vs_dogs` dataset scripts as suggested by @albertvillanova. Additional changes: * Fix `iter_files` in `MockDownloadManager` (see this [CI error](https://app.circleci.com/pipelines/github/huggingface/datasets/9247/workflows/2657ff8a-b531-4fd9-a9fc-6541a72e8d83/jobs/57028)...
mariosasko
https://github.com/huggingface/datasets/pull/3477
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3477", "html_url": "https://github.com/huggingface/datasets/pull/3477", "diff_url": "https://github.com/huggingface/datasets/pull/3477.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3477.patch", "merged_at": "2021-12-28T15:15...
true
1,087,622,872
3,476
Extend support for streaming datasets that use ET.parse
closed
[]
2021-12-23T11:18:46
2021-12-23T15:34:30
2021-12-23T15:34:30
This PR extends the support in streaming mode for datasets that use `ET.parse`, by patching the function. This PR adds support for streaming mode to datasets: 1. ami 1. assin 1. assin2 1. counter 1. enriched_web_nlg 1. europarl_bilingual 1. hyperpartisan_news_detection 1. polsum 1. qa4mre 1. quail 1. ted_...
albertvillanova
https://github.com/huggingface/datasets/pull/3476
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3476", "html_url": "https://github.com/huggingface/datasets/pull/3476", "diff_url": "https://github.com/huggingface/datasets/pull/3476.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3476.patch", "merged_at": "2021-12-23T15:34...
true
1,087,352,041
3,475
The rotten_tomatoes dataset of movie reviews contains some reviews in Spanish
open
[ "Hi @puzzler10, thanks for reporting.\r\n\r\nPlease note this dataset is not hosted on Hugging Face Hub. See: \r\nhttps://github.com/huggingface/datasets/blob/c8f914473b041833fd47178fa4373cdcb56ac522/datasets/rotten_tomatoes/rotten_tomatoes.py#L42\r\n\r\nIf there are issues with the source data of a dataset, you sh...
2021-12-23T03:56:43
2021-12-24T00:23:03
null
## Describe the bug See title. I don't think this is intentional, and they should probably be removed. If they stay, the dataset description should at least be updated to make this clear to the user. ## Steps to reproduce the bug Go to the [dataset viewer](https://huggingface.co/datasets/viewer/?dataset=rotten_tomato...
puzzler10
https://github.com/huggingface/datasets/issues/3475
null
false
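A quick way to surface the reviews in question; `langdetect` here is just one illustrative choice of detector:

```python
from datasets import load_dataset
from langdetect import detect  # pip install langdetect

ds = load_dataset("rotten_tomatoes", split="train")

# Flag reviews whose detected language is Spanish; detection on short
# movie-review snippets is noisy, so treat this as a rough filter.
spanish = ds.filter(lambda example: detect(example["text"]) == "es")
print(spanish.num_rows)
```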