| column | type | min | max |
|---|---|---|---|
| id | int64 | 599M | 3.48B |
| number | int64 | 1 | 7.8k |
| title | string (length) | 1 | 290 |
| state | string (2 classes) | | |
| comments | list (length) | 0 | 30 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-10-05 06:37:50 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-10-05 10:32:43 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-10-01 13:56:03 |
| body | string (length) | 0 | 228k |
| user | string (length) | 3 | 26 |
| html_url | string (length) | 46 | 51 |
| pull_request | dict | | |
| is_pull_request | bool (2 classes) | | |
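Each row below follows this schema. As a minimal sketch (plain Python, no external libraries; the two sample rows are copied from the records below, with only the fields needed for the example), rows can be split into pull requests versus plain issues via the `is_pull_request` flag, and the `timestamp[s]` fields parsed for date arithmetic:

```python
from datetime import datetime

# Two sample rows following the schema above (subset of fields, values
# taken from records #928 and #927 below).
rows = [
    {"id": 753722324, "number": 928,
     "title": "Add the Multilingual Amazon Reviews Corpus",
     "state": "closed", "comments": [],
     "created_at": "2020-11-30T18:58:06",
     "user": "joeddav", "is_pull_request": True},
    {"id": 753679020, "number": 927,
     "title": "Hello",
     "state": "closed", "comments": [],
     "created_at": "2020-11-30T17:50:05",
     "user": "k125-ak", "is_pull_request": False},
]

# Pull requests and plain issues share one schema; the boolean flag
# (not the presence of a body) distinguishes them.
pulls = [r for r in rows if r["is_pull_request"]]
issues = [r for r in rows if not r["is_pull_request"]]

# Parse the ISO-8601 created_at strings to find the oldest row.
oldest = min(datetime.fromisoformat(r["created_at"]) for r in rows)

print(len(pulls), len(issues), oldest.isoformat())
```

The same split could be done with `Dataset.filter` if the data is loaded through the `datasets` library, but the dict form above keeps the example dependency-free.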
**#928 · Add the Multilingual Amazon Reviews Corpus** (pull request, closed, merged 2020-12-01T16:04:27)
- id: 753,722,324 · user: joeddav · comments: []
- created: 2020-11-30T18:58:06 · updated: 2020-12-01T16:04:30 · closed: 2020-12-01T16:04:27
- url: https://github.com/huggingface/datasets/pull/928
- body: - **Name:** Multilingual Amazon Reviews Corpus* (`amazon_reviews_multi`) - **Description:** A collection of Amazon reviews in English, Japanese, German, French, Spanish and Chinese. - **Paper:** https://arxiv.org/abs/2010.02573 ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` us...

**#927 · Hello** (issue, closed)
- id: 753,679,020 · user: k125-ak · comments: []
- created: 2020-11-30T17:50:05 · updated: 2020-11-30T17:50:30 · closed: 2020-11-30T17:50:30
- url: https://github.com/huggingface/datasets/issues/927

**#926 · add inquisitive** (pull request, closed, merged 2020-12-02T13:40:13)
- id: 753,676,069 · user: patil-suraj · comments: []
- created: 2020-11-30T17:45:22 · updated: 2020-12-02T13:45:22 · closed: 2020-12-02T13:40:13
- url: https://github.com/huggingface/datasets/pull/926
- body: Adding inquisitive qg dataset More info: https://github.com/wjko2/INQUISITIVE

**#925 · Add Turku NLP Corpus for Finnish NER** (pull request, closed, merged 2020-12-03T14:07:10)
- id: 753,672,661 · user: abhishekkrthakur · comments: []
- created: 2020-11-30T17:40:19 · updated: 2020-12-03T14:07:11 · closed: 2020-12-03T14:07:10
- url: https://github.com/huggingface/datasets/pull/925

**#924 · Add DART** (pull request, closed, merged 2020-12-02T03:13:41)
- id: 753,631,951 · user: lhoestq · comments: []
- created: 2020-11-30T16:42:37 · updated: 2020-12-02T03:13:42 · closed: 2020-12-02T03:13:41
- url: https://github.com/huggingface/datasets/pull/924
- body: - **Name:** *DART* - **Description:** *DART is a large dataset for open-domain structured data record to text generation.* - **Paper:** *https://arxiv.org/abs/2007.02871* - **Data:** *https://github.com/Yale-LILY/dart#leaderboard* ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py...

**#923 · Add CC-100 dataset** (pull request, closed, not merged)
- id: 753,569,220 · user: albertvillanova · comments: []
- created: 2020-11-30T15:23:22 · updated: 2021-04-20T13:34:17 · closed: 2021-04-20T13:34:17
- url: https://github.com/huggingface/datasets/pull/923
- body: Add CC-100. Close #773

**#922 · Add XOR QA Dataset** (pull request, closed, merged 2020-12-02T03:12:21)
- id: 753,559,130 · user: sumanthd17 · comments: []
- created: 2020-11-30T15:10:54 · updated: 2020-12-02T03:12:21 · closed: 2020-12-02T03:12:21
- url: https://github.com/huggingface/datasets/pull/922
- body: Added XOR Question Answering Dataset. The link to the dataset can be found [here](https://nlp.cs.washington.edu/xorqa/) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data

**#920 · add dream dataset** (pull request, closed, merged 2020-12-02T15:39:12)
- id: 753,445,747 · user: patil-suraj · comments: []
- created: 2020-11-30T12:40:14 · updated: 2020-12-03T16:45:12 · closed: 2020-12-02T15:39:12
- url: https://github.com/huggingface/datasets/pull/920
- body: Adding Dream: a Dataset and for Dialogue-Based Reading Comprehension More details: https://dataset.org/dream/ https://github.com/nlpdata/dream
**#919 · wrong length with datasets** (issue, closed)
- id: 753,434,472 · user: rabeehk · comments: []
- created: 2020-11-30T12:23:39 · updated: 2020-11-30T12:37:27 · closed: 2020-11-30T12:37:26
- url: https://github.com/huggingface/datasets/issues/919
- body: Hi I have a MRPC dataset which I convert it to seq2seq format, then this is of this format: `Dataset(features: {'src_texts': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 10) ` I feed it to a dataloader: ``` dataloader = DataLoader( train_dataset, ...

**#918 · Add conll2002** (pull request, closed, merged 2020-11-30T18:34:29)
- id: 753,397,440 · user: lhoestq · comments: []
- created: 2020-11-30T11:29:35 · updated: 2020-11-30T18:34:30 · closed: 2020-11-30T18:34:29
- url: https://github.com/huggingface/datasets/pull/918
- body: Adding the Conll2002 dataset for NER. More info here : https://www.clips.uantwerpen.be/conll2002/ner/ ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` ...

**#917 · Addition of Concode Dataset** (pull request, closed, not merged)
- id: 753,391,591 · user: reshinthadithyan · comments: []
- created: 2020-11-30T11:20:59 · updated: 2020-12-29T02:55:36 · closed: 2020-12-29T02:55:36
- url: https://github.com/huggingface/datasets/pull/917
- body: ##Overview Concode Dataset contains pairs of Nl Queries and the corresponding Code.(Contextual Code Generation) Reference Links Paper Link = https://arxiv.org/pdf/1904.09086.pdf Github Link =https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/text-to-code

**#916 · Add Swedish NER Corpus** (pull request, closed, merged 2020-12-02T03:10:49)
- id: 753,376,643 · user: abhishekkrthakur · comments: []
- created: 2020-11-30T10:59:51 · updated: 2020-12-02T03:10:50 · closed: 2020-12-02T03:10:49
- url: https://github.com/huggingface/datasets/pull/916

**#915 · Shall we change the hashing to encoding to reduce potential replicated cache files?** (issue, open)
- id: 753,118,481 · user: zhuzilin · comments: []
- created: 2020-11-30T03:50:46 · updated: 2020-12-24T05:11:49 · closed: null
- url: https://github.com/huggingface/datasets/issues/915
- body: Hi there. For now, we are using `xxhash` to hash the transformations to fingerprint and we will save a copy of the processed dataset to disk if there is a new hash value. However, there are some transformations that are idempotent or commutative to each other. I think that encoding the transformation chain as the finge...

**#914 · Add list_github_datasets api for retrieving dataset name list in github repo** (pull request, closed, not merged)
- id: 752,956,106 · user: zhuzilin · comments: []
- created: 2020-11-29T16:42:15 · updated: 2020-12-02T07:21:16 · closed: 2020-12-02T07:21:16
- url: https://github.com/huggingface/datasets/pull/914
- body: Thank you for your great effort on unifying data processing for NLP! This pr is trying to add a new api `list_github_datasets` in the `inspect` module. The reason for it is that the current `list_datasets` api need to access https://huggingface.co/api/datasets to get a large json. However, this connection can be rea...

**#913 · My new dataset PEC** (pull request, closed, not merged)
- id: 752,892,020 · user: zhongpeixiang · comments: []
- created: 2020-11-29T11:10:37 · updated: 2020-12-01T10:41:53 · closed: 2020-12-01T10:41:53
- url: https://github.com/huggingface/datasets/pull/913
- body: A new dataset PEC published in EMNLP 2020.

**#911 · datasets module not found** (issue, closed)
- id: 752,806,215 · user: sbassam · comments: []
- created: 2020-11-29T01:24:15 · updated: 2020-11-29T14:33:09 · closed: 2020-11-29T14:33:09
- url: https://github.com/huggingface/datasets/issues/911
- body: Currently, running `from datasets import load_dataset` will throw a `ModuleNotFoundError: No module named 'datasets'` error.
**#910 · Grindr meeting app web.Grindr** (issue, closed)
- id: 752,772,723 · user: jackin34 · comments: []
- created: 2020-11-28T21:36:23 · updated: 2020-11-29T10:11:51 · closed: 2020-11-29T10:11:51
- url: https://github.com/huggingface/datasets/issues/910
- body: ## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...

**#909 · Add FiNER dataset** (pull request, closed, merged 2020-12-07T16:56:23)
- id: 752,508,299 · user: stefan-it · comments: []
- created: 2020-11-27T23:54:20 · updated: 2020-12-07T16:56:23 · closed: 2020-12-07T16:56:23
- url: https://github.com/huggingface/datasets/pull/909
- body: Hi, this PR adds "A Finnish News Corpus for Named Entity Recognition" as new `finer` dataset. The dataset is described in [this paper](https://arxiv.org/abs/1908.04212). The data is publicly available in [this GitHub](https://github.com/mpsilfve/finer-data). Notice: they provide two testsets. The additional te...

**#908 · Add dependency on black for tests** (pull request, closed, not merged)
- id: 752,428,652 · user: albertvillanova · comments: []
- created: 2020-11-27T19:12:48 · updated: 2020-11-27T21:46:53 · closed: 2020-11-27T21:46:52
- url: https://github.com/huggingface/datasets/pull/908
- body: Add package 'black' as an installation requirement for tests.

**#907 · Remove os.path.join from all URLs** (pull request, closed, merged 2020-11-29T22:48:19)
- id: 752,422,351 · user: albertvillanova · comments: []
- created: 2020-11-27T18:55:30 · updated: 2020-11-29T22:48:20 · closed: 2020-11-29T22:48:19
- url: https://github.com/huggingface/datasets/pull/907
- body: Remove `os.path.join` from all URLs in dataset scripts.

**#906 · Fix url with backslash in windows for blimp and pg19** (pull request, closed, merged 2020-11-27T18:19:55)
- id: 752,403,395 · user: lhoestq · comments: []
- created: 2020-11-27T17:59:11 · updated: 2020-11-27T18:19:56 · closed: 2020-11-27T18:19:56
- url: https://github.com/huggingface/datasets/pull/906
- body: Following #903 I also fixed blimp and pg19 which were using the `os.path.join` to create urls cc @albertvillanova

**#905 · Disallow backslash in urls** (pull request, closed, merged 2020-11-29T22:48:36)
- id: 752,395,456 · user: lhoestq · comments: []
- created: 2020-11-27T17:38:28 · updated: 2020-11-29T22:48:37 · closed: 2020-11-29T22:48:36
- url: https://github.com/huggingface/datasets/pull/905
- body: Following #903 @albertvillanova noticed that there are sometimes bad usage of `os.path.join` in datasets scripts to create URLS. However this should be avoided since it doesn't work on windows. I'm suggesting a test to make sure we that all the urls don't have backslashes in them in the datasets scripts. The tests ...

**#904 · Very detailed step-by-step on how to add a dataset** (pull request, closed, merged 2020-11-30T09:56:26)
- id: 752,372,743 · user: thomwolf · comments: []
- created: 2020-11-27T16:45:21 · updated: 2020-11-30T09:56:27 · closed: 2020-11-30T09:56:26
- url: https://github.com/huggingface/datasets/pull/904
- body: Add very detailed step-by-step instructions to add a new dataset to the library.
**#903 · Fix URL with backslash in Windows** (pull request, closed, merged 2020-11-27T18:04:46)
- id: 752,360,614 · user: albertvillanova · comments: []
- created: 2020-11-27T16:26:24 · updated: 2020-11-27T18:04:46 · closed: 2020-11-27T18:04:46
- url: https://github.com/huggingface/datasets/pull/903
- body: In Windows, `os.path.join` generates URLs containing backslashes, when the first "path" does not end with a slash. In general, `os.path.join` should be avoided to generate URLs.

**#902 · Follow cache_dir parameter to gcs downloader** (pull request, closed, merged 2020-11-29T22:48:53)
- id: 752,345,739 · user: lhoestq · comments: []
- created: 2020-11-27T16:02:06 · updated: 2020-11-29T22:48:54 · closed: 2020-11-29T22:48:53
- url: https://github.com/huggingface/datasets/pull/902
- body: As noticed in #900 the cache_dir parameter was not followed to the downloader in the case of an already processed dataset hosted on our google storage (one of them is natural questions). Fix #900

**#901 · Addition of Nl2Bash Dataset** (pull request, closed, not merged)
- id: 752,233,851 · user: reshinthadithyan · comments: []
- created: 2020-11-27T12:53:55 · updated: 2020-11-29T18:09:25 · closed: 2020-11-29T18:08:51
- url: https://github.com/huggingface/datasets/pull/901
- body: ## Overview The NL2Bash data contains over 10,000 instances of linux shell commands and their corresponding natural language descriptions provided by experts, from the Tellina system. The dataset features 100+ commonly used shell utilities. ## Footnotes The following dataset marks the first ML on source code related...

**#900 · datasets.load_dataset() custom chaching directory bug** (issue, closed)
- id: 752,214,066 · user: SapirWeissbuch · comments: []
- created: 2020-11-27T12:18:53 · updated: 2020-11-29T22:48:53 · closed: 2020-11-29T22:48:53
- url: https://github.com/huggingface/datasets/issues/900
- body: Hello, I'm having issue with loading a dataset with a custom `cache_dir`. Despite specifying the output dir, it is still downloaded to `~/.cache`. ## Environment info - `datasets` version: 1.1.3 - Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1 - Python version: 3.7.3 ## The code I'm running: ```p...

**#899 · Allow arrow based builder in auto dummy data generation** (pull request, closed, merged 2020-11-27T13:30:08)
- id: 752,191,227 · user: lhoestq · comments: []
- created: 2020-11-27T11:39:38 · updated: 2020-11-27T13:30:09 · closed: 2020-11-27T13:30:08
- url: https://github.com/huggingface/datasets/pull/899
- body: Following #898 I added support for arrow based builder for the auto dummy data generator

**#898 · Adding SQA dataset** (pull request, closed, not merged)
- id: 752,148,284 · user: thomwolf · comments: []
- created: 2020-11-27T10:29:18 · updated: 2020-12-15T12:54:40 · closed: 2020-12-15T12:54:19
- url: https://github.com/huggingface/datasets/pull/898
- body: As discussed in #880 Seems like automatic dummy-data generation doesn't work if the builder is a `ArrowBasedBuilder`, do you think you could take a look @lhoestq ?

**#897 · Dataset viewer issues** (issue, closed)
- id: 752,100,256 · user: BramVanroy · comments: []
- created: 2020-11-27T09:14:34 · updated: 2021-10-31T09:12:01 · closed: 2021-10-31T09:12:01
- url: https://github.com/huggingface/datasets/issues/897
- body: I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though: - the URL is still under `nlp`, perhaps an alias for `datasets` can be made - when I remove a **feature** (and the feature list is empty), I get an error. T...
**#896 · Add template and documentation for dataset card** (pull request, closed, merged 2020-11-28T01:10:14)
- id: 751,834,265 · user: yjernite · comments: []
- created: 2020-11-26T21:30:25 · updated: 2020-11-28T01:10:15 · closed: 2020-11-28T01:10:15
- url: https://github.com/huggingface/datasets/pull/896
- body: This PR adds a template for dataset cards, as well as a guide to filling out the template and a completed example for the ELI5 dataset, building on the work of @mcmillanmajora New pull requests adding datasets should now have a README.md file which serves both to hold the tags we will have to index the datasets and...

**#895 · Better messages regarding split naming** (pull request, closed, merged 2020-11-27T13:30:59)
- id: 751,782,295 · user: lhoestq · comments: []
- created: 2020-11-26T18:55:46 · updated: 2020-11-27T13:31:00 · closed: 2020-11-27T13:30:59
- url: https://github.com/huggingface/datasets/pull/895
- body: I made explicit the error message when a bad split name is used. Also I wanted to allow the `-` symbol for split names but actually this symbol is used to name the arrow files `{dataset_name}-{dataset_split}.arrow` so we should probably keep it this way, i.e. not allowing the `-` symbol in split names. Moreover in t...

**#894 · Allow several tags sets** (pull request, closed, not merged)
- id: 751,734,905 · user: lhoestq · comments: []
- created: 2020-11-26T17:04:13 · updated: 2021-05-05T18:24:17 · closed: 2020-11-27T20:15:49
- url: https://github.com/huggingface/datasets/pull/894
- body: Hi ! Currently we have three dataset cards : snli, cnn_dailymail and allocine. For each one of those datasets a set of tag is defined. The set of tags contains fields like `multilinguality`, `task_ids`, `licenses` etc. For certain datasets like `glue` for example, there exist several configurations: `sst2`, `mnl...

**#893 · add metrec: arabic poetry dataset** (pull request, closed, merged 2020-12-01T15:15:07)
- id: 751,703,696 · user: zaidalyafeai · comments: []
- created: 2020-11-26T16:10:16 · updated: 2020-12-01T16:24:55 · closed: 2020-12-01T15:15:07
- url: https://github.com/huggingface/datasets/pull/893

**#892 · Add a few datasets of reference in the documentation** (pull request, closed, merged 2020-11-27T18:08:44)
- id: 751,658,262 · user: lhoestq · comments: []
- created: 2020-11-26T15:02:39 · updated: 2020-11-27T18:08:45 · closed: 2020-11-27T18:08:44
- url: https://github.com/huggingface/datasets/pull/892
- body: I started making a small list of various datasets of reference in the documentation. Since many datasets share a lot in common I think it's good to have a list of datasets scripts to get some inspiration from. Let me know what you think, and if you have ideas of other datasets that we may add to this list, please l...

**#891 · gitignore .python-version** (pull request, closed, merged 2020-11-26T13:28:26)
- id: 751,576,869 · user: patil-suraj · comments: []
- created: 2020-11-26T13:05:58 · updated: 2020-11-26T13:28:27 · closed: 2020-11-26T13:28:26
- url: https://github.com/huggingface/datasets/pull/891
- body: ignore `.python-version` added by `pyenv`

**#890 · Add LER** (pull request, closed, not merged)
- id: 751,534,050 · user: JoelNiklaus · comments: []
- created: 2020-11-26T11:58:23 · updated: 2020-12-01T13:33:35 · closed: 2020-12-01T13:26:16
- url: https://github.com/huggingface/datasets/pull/890
**#889 · Optional per-dataset default config name** (pull request, closed, merged 2020-11-30T17:27:27)
- id: 751,115,691 · user: joeddav · comments: []
- created: 2020-11-25T21:02:30 · updated: 2020-11-30T17:27:33 · closed: 2020-11-30T17:27:27
- url: https://github.com/huggingface/datasets/pull/889
- body: This PR adds a `DEFAULT_CONFIG_NAME` class attribute to `DatasetBuilder`. This allows a dataset to have a specified default config name when a dataset has more than one config but the user does not specify it. For example, after defining `DEFAULT_CONFIG_NAME = "combined"` in PolyglotNER, a user can now do the following...

**#888 · Nested lists are zipped unexpectedly** (issue, closed)
- id: 750,944,422 · user: AmitMY · comments: []
- created: 2020-11-25T16:07:46 · updated: 2020-11-25T17:30:39 · closed: 2020-11-25T17:30:39
- url: https://github.com/huggingface/datasets/issues/888
- body: I might misunderstand something, but I expect that if I define: ```python "top": datasets.features.Sequence({ "middle": datasets.features.Sequence({ "bottom": datasets.Value("int32") }) }) ``` And I then create an example: ```python yield 1, { "top": [{ "middle": [ {"bottom": 1}, ...

**#887 · pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>** (issue, open)
- id: 750,868,831 · user: AmitMY · comments: []
- created: 2020-11-25T14:32:21 · updated: 2021-09-09T17:03:40 · closed: null
- url: https://github.com/huggingface/datasets/issues/887
- body: I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and ...

**#886 · Fix wikipedia custom config** (pull request, closed, merged 2020-11-25T15:42:13)
- id: 750,829,314 · user: lhoestq · comments: []
- created: 2020-11-25T13:44:12 · updated: 2021-06-25T05:24:16 · closed: 2020-11-25T15:42:13
- url: https://github.com/huggingface/datasets/pull/886
- body: It should be possible to use the wikipedia dataset with any `language` and `date`. However it was not working as noticed in #784 . Indeed the custom wikipedia configurations were not enabled for some reason. I fixed that and was able to run ```python from datasets import load_dataset load_dataset("./datasets/wi...

**#885 · Very slow cold-start** (issue, closed)
- id: 750,789,052 · user: AmitMY · comments: []
- created: 2020-11-25T12:47:58 · updated: 2021-01-13T11:31:25 · closed: 2021-01-13T11:31:25
- url: https://github.com/huggingface/datasets/issues/885
- body: Hi, I expect when importing `datasets` that nothing major happens in the background, and so the import should be insignificant. When I load a metric, or a dataset, its fine that it takes time. The following ranges from 3 to 9 seconds: ``` python -m timeit -n 1 -r 1 'from datasets import load_dataset' ``` edi...

**#884 · Auto generate dummy data** (pull request, closed, merged 2020-11-26T14:18:46)
- id: 749,862,034 · user: lhoestq · comments: []
- created: 2020-11-24T16:31:34 · updated: 2020-11-26T14:18:47 · closed: 2020-11-26T14:18:46
- url: https://github.com/huggingface/datasets/pull/884
- body: When adding a new dataset to the library, dummy data creation can take some time. To make things easier I added a command line tool that automatically generates dummy data when possible. The tool only supports certain data files types: txt, csv, tsv, jsonl, json and xml. Here are some examples: ``` python data...

**#883 · Downloading/caching only a part of a datasets' dataset.** (issue, open)
- id: 749,750,801 · user: SapirWeissbuch · comments: []
- created: 2020-11-24T14:25:18 · updated: 2020-11-27T13:51:55 · closed: null
- url: https://github.com/huggingface/datasets/issues/883
- body: Hi, I want to use the validation data *only* (of natural question). I don't want to have the whole dataset cached in my machine, just the dev set. Is this possible? I can't find a way to do it in the docs. Thank you, Sapir
**#882 · Update README.md** (pull request, closed, merged 2021-01-29T10:41:06)
- id: 749,662,188 · user: vaibhavad · comments: []
- created: 2020-11-24T12:23:52 · updated: 2021-01-29T10:41:07 · closed: 2021-01-29T10:41:07
- url: https://github.com/huggingface/datasets/pull/882
- body: "no label" is "-" in the original dataset but "-1" in Huggingface distribution.

**#881 · Use GCP download url instead of tensorflow custom download for boolq** (pull request, closed, merged 2020-11-24T10:12:33)
- id: 749,548,107 · user: lhoestq · comments: []
- created: 2020-11-24T09:47:11 · updated: 2020-11-24T10:12:34 · closed: 2020-11-24T10:12:33
- url: https://github.com/huggingface/datasets/pull/881
- body: BoolQ is a dataset that used tf.io.gfile.copy to download the file from a GCP bucket. It prevented the dataset to be downloaded twice because of a FileAlreadyExistsError. Even though the error could be fixed by providing `overwrite=True` to the tf.io.gfile.copy call, I changed the script to use GCP download urls and ...

**#880 · Add SQA** (issue, closed)
- id: 748,949,606 · user: NielsRogge · comments: []
- created: 2020-11-23T16:31:55 · updated: 2020-12-23T13:58:24 · closed: 2020-12-23T13:58:23
- url: https://github.com/huggingface/datasets/issues/880
- body: ## Adding a Dataset - **Name:** SQA (Sequential Question Answering) by Microsoft. - **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total. - **Paper:** https://www.microsoft.com/en-us/r...

**#879 · boolq does not load** (issue, closed)
- id: 748,848,847 · user: rabeehk · comments: []
- created: 2020-11-23T14:28:28 · updated: 2022-10-05T12:23:32 · closed: 2022-10-05T12:23:32
- url: https://github.com/huggingface/datasets/issues/879
- body: Hi I am getting these errors trying to load boolq thanks Traceback (most recent call last): File "test.py", line 5, in <module> data = AutoTask().get("boolq").get_dataset("train", n_obs=10) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset d...

**#878 · Loading Data From S3 Path in Sagemaker** (issue, open)
- id: 748,621,981 · user: mahesh1amour · comments: []
- created: 2020-11-23T09:17:22 · updated: 2020-12-23T09:53:08 · closed: null
- url: https://github.com/huggingface/datasets/issues/878
- body: In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files...

**#877 · DataLoader(datasets) become more and more slowly within iterations** (issue, closed)
- id: 748,234,438 · user: shexuan · comments: []
- created: 2020-11-22T12:41:10 · updated: 2024-11-22T03:02:53 · closed: 2020-11-29T15:45:12
- url: https://github.com/huggingface/datasets/issues/877
- body: Hello, when I for loop my dataloader, the loading speed is becoming more and more slowly! ``` dataset = load_from_disk(dataset_path) # around 21,000,000 lines lineloader = tqdm(DataLoader(dataset, batch_size=1)) for idx, line in enumerate(lineloader): # do some thing for each line ``` In the begining, th...

**#876 · imdb dataset cannot be loaded** (issue, closed)
- id: 748,195,104 · user: rabeehk · comments: []
- created: 2020-11-22T08:24:43 · updated: 2024-05-10T03:03:29 · closed: 2020-12-24T17:38:47
- url: https://github.com/huggingface/datasets/issues/876
- body: Hi I am trying to load the imdb train dataset `dataset = datasets.load_dataset("imdb", split="train")` getting following errors, thanks for your help ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/...
**#875 · bug in boolq dataset loading** (issue, closed)
- id: 748,194,311 · user: rabeehk · comments: []
- created: 2020-11-22T08:18:34 · updated: 2020-11-24T10:12:33 · closed: 2020-11-24T10:12:33
- url: https://github.com/huggingface/datasets/issues/875
- body: Hi I am trying to load boolq dataset: ``` import datasets datasets.load_dataset("boolq") ``` I am getting the following errors, thanks for your help ``` >>> import datasets 2020-11-22 09:16:30.070332: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda...

**#874 · trec dataset unavailable** (issue, closed)
- id: 748,193,140 · user: rabeehk · comments: []
- created: 2020-11-22T08:09:36 · updated: 2020-11-27T13:56:42 · closed: 2020-11-27T13:56:42
- url: https://github.com/huggingface/datasets/issues/874
- body: Hi when I try to load the trec dataset I am getting these errors, thanks for your help `datasets.load_dataset("trec", split="train") ` ``` File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ...

**#873 · load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error** (issue, closed)
- id: 747,959,523 · user: vishal-burman · comments: []
- created: 2020-11-21T06:30:45 · updated: 2023-08-03T12:07:03 · closed: 2020-11-22T12:18:05
- url: https://github.com/huggingface/datasets/issues/873
- body: ``` from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` Stack trace: ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-6-2e06a8332652> in <module>() ...

**#872 · Add IndicGLUE dataset and Metrics** (pull request, closed, merged 2020-11-25T15:26:07)
- id: 747,653,697 · user: sumanthd17 · comments: []
- created: 2020-11-20T17:09:34 · updated: 2020-11-25T17:01:11 · closed: 2020-11-25T15:26:07
- url: https://github.com/huggingface/datasets/pull/872
- body: Added IndicGLUE benchmark for evaluating models on 11 Indian Languages. The descriptions of the tasks and the corresponding paper can be found [here](https://indicnlp.ai4bharat.org/indic-glue/) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data

**#871 · terminate called after throwing an instance of 'google::protobuf::FatalException'** (issue, closed; record truncated)
- id: 747,470,136 · comments: []
- created: 2020-11-20T12:56:24
2020-12-12T21:16:32
2020-12-12T21:16:32
Hi I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo could you assist me please? thanks 100%|█████████████████████████████████████████████████████████████████████████████████████████████...
rabeehk
https://github.com/huggingface/datasets/issues/871
null
false
747,021,996
870
[Feature Request] Add optional parameter in text loading script to preserve linebreaks
closed
[]
2020-11-19T23:51:31
2022-06-01T15:25:53
2022-06-01T15:25:52
I'm working on a project about rhyming verse using phonetic poetry and song lyrics, and line breaks are a vital part of the data. I recently switched over to use the datasets library when my various corpora grew larger than my computer's memory. And so far, it is SO great. But the first time I processed all of ...
jncasey
https://github.com/huggingface/datasets/issues/870
null
false
746,495,711
869
Update ner datasets infos
closed
[]
2020-11-19T11:28:03
2020-11-19T14:14:18
2020-11-19T14:14:17
Update the dataset_infos.json files for changes made in #850 regarding the ner datasets feature types (and the change to ClassLabel) I also fixed the ner types of conll2003
lhoestq
https://github.com/huggingface/datasets/pull/869
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/869", "html_url": "https://github.com/huggingface/datasets/pull/869", "diff_url": "https://github.com/huggingface/datasets/pull/869.diff", "patch_url": "https://github.com/huggingface/datasets/pull/869.patch", "merged_at": "2020-11-19T14:14:17"...
true
745,889,882
868
Consistent metric outputs
closed
[]
2020-11-18T18:05:59
2023-09-24T09:50:25
2023-07-11T09:35:52
To automate the use of metrics, they should return consistent outputs. In particular I'm working on adding a conversion of metrics to keras metrics. To achieve this we need two things: - have each metric return dictionaries of string -> floats since each keras metrics should return one float - define in the metric ...
lhoestq
https://github.com/huggingface/datasets/pull/868
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/868", "html_url": "https://github.com/huggingface/datasets/pull/868", "diff_url": "https://github.com/huggingface/datasets/pull/868.diff", "patch_url": "https://github.com/huggingface/datasets/pull/868.patch", "merged_at": null }
true
745,773,955
867
Fix some metrics feature types
closed
[]
2020-11-18T15:46:11
2020-11-19T17:35:58
2020-11-19T17:35:57
Replace `int` feature type to `int32` since `int` is not a pyarrow dtype in those metrics: - accuracy - precision - recall - f1 I also added the sklearn citation and used keyword arguments to remove future warnings
lhoestq
https://github.com/huggingface/datasets/pull/867
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/867", "html_url": "https://github.com/huggingface/datasets/pull/867", "diff_url": "https://github.com/huggingface/datasets/pull/867.diff", "patch_url": "https://github.com/huggingface/datasets/pull/867.patch", "merged_at": "2020-11-19T17:35:57"...
true
745,719,222
866
OSCAR from Inria group
closed
[]
2020-11-18T14:40:54
2020-11-18T15:01:30
2020-11-18T15:01:30
## Adding a Dataset - **Name:** *OSCAR* (Open Super-large Crawled ALMAnaCH coRpus), multilingual parsing of Common Crawl (separate crawls for many different languages), [here](https://oscar-corpus.com/). - **Description:** *OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by la...
jchwenger
https://github.com/huggingface/datasets/issues/866
null
false
745,430,497
865
Have Trouble importing `datasets`
closed
[]
2020-11-18T08:04:41
2020-11-18T08:16:35
2020-11-18T08:16:35
I'm failing to import transformers (v4.0.0-dev), and tracing the cause seems to be failing to import datasets. I cloned the newest version of datasets (master branch), and do `pip install -e .`. Then, `import datasets` causes the error below. ``` ~/workspace/Clone/datasets/src/datasets/utils/file_utils.py in ...
forest1988
https://github.com/huggingface/datasets/issues/865
null
false
745,322,357
864
Unable to download cnn_dailymail dataset
closed
[]
2020-11-18T04:38:02
2020-11-20T05:22:11
2020-11-20T05:22:10
### Script to reproduce the error ``` from datasets import load_dataset train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") ``` ### Error ``` -------------------------------------------------------------...
rohitashwa1907
https://github.com/huggingface/datasets/issues/864
null
false
744,954,534
863
Add clear_cache parameter in the test command
closed
[]
2020-11-17T17:52:29
2020-11-18T14:44:25
2020-11-18T14:44:24
For certain datasets like OSCAR #348 there are lots of different configurations and each one of them can take a lot of disk space. I added a `--clear_cache` flag to the `datasets-cli test` command to be able to clear the cache after each configuration test to avoid filling up the disk. It should enable an easier gen...
lhoestq
https://github.com/huggingface/datasets/pull/863
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/863", "html_url": "https://github.com/huggingface/datasets/pull/863", "diff_url": "https://github.com/huggingface/datasets/pull/863.diff", "patch_url": "https://github.com/huggingface/datasets/pull/863.patch", "merged_at": "2020-11-18T14:44:24"...
true
744,906,131
862
Update head requests
closed
[]
2020-11-17T16:49:06
2020-11-18T14:43:53
2020-11-18T14:43:50
Get requests and Head requests didn't have the same parameters.
lhoestq
https://github.com/huggingface/datasets/pull/862
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/862", "html_url": "https://github.com/huggingface/datasets/pull/862", "diff_url": "https://github.com/huggingface/datasets/pull/862.diff", "patch_url": "https://github.com/huggingface/datasets/pull/862.patch", "merged_at": "2020-11-18T14:43:50"...
true
744,753,458
861
Possible Bug: Small training/dataset file creates gigantic output
closed
[]
2020-11-17T13:48:59
2021-03-30T14:04:04
2021-03-22T12:04:55
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB r...
NebelAI
https://github.com/huggingface/datasets/issues/861
null
false
744,750,691
860
wmt16 cs-en does not donwload
closed
[]
2020-11-17T13:45:35
2022-10-05T12:27:00
2022-10-05T12:26:59
Hi I am trying with wmt16, cs-en pair, thanks for the help, perhaps similar to the ro-en issue. thanks split="train", n_obs=data_args.n_train) for task in data_args.task} File "finetune_t5_trainer.py", line 109, in <dictcomp> split="train", n_obs=data_args.n_train) for task in data_args.task} File "/hom...
rabeehk
https://github.com/huggingface/datasets/issues/860
null
false
743,917,091
859
Integrate file_lock inside the lib for better logging control
closed
[]
2020-11-16T15:13:39
2020-11-16T17:06:44
2020-11-16T17:06:42
Previously the locking system of the lib was based on the file_lock package. However as noticed in #812 there were too many logs printed even when the datasets logging was set to warnings or errors. For example ```python import logging logging.basicConfig(level=logging.INFO) import datasets datasets.set_verbo...
lhoestq
https://github.com/huggingface/datasets/pull/859
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/859", "html_url": "https://github.com/huggingface/datasets/pull/859", "diff_url": "https://github.com/huggingface/datasets/pull/859.diff", "patch_url": "https://github.com/huggingface/datasets/pull/859.patch", "merged_at": "2020-11-16T17:06:42"...
true
743,904,516
858
Add SemEval-2010 task 8
closed
[]
2020-11-16T14:57:57
2020-11-26T17:28:55
2020-11-26T17:28:55
Hi, I don't know how to add dummy data, since I create the validation set out of the last 1000 examples of the train set. If you have a suggestion, I am happy to implement it. Cheers, Joel
JoelNiklaus
https://github.com/huggingface/datasets/pull/858
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/858", "html_url": "https://github.com/huggingface/datasets/pull/858", "diff_url": "https://github.com/huggingface/datasets/pull/858.diff", "patch_url": "https://github.com/huggingface/datasets/pull/858.patch", "merged_at": "2020-11-26T17:28:55"...
true
743,863,214
857
Use pandas reader in csv
closed
[]
2020-11-16T14:05:45
2020-11-19T17:35:40
2020-11-19T17:35:38
The pyarrow CSV reader has issues that the pandas one doesn't (see #836 ). To fix that I switched to the pandas csv reader. The new reader is compatible with all the pandas parameters to read csv files. Moreover it reads csv by chunk in order to save RAM, while the pyarrow one loads everything in memory. Fix #836...
lhoestq
https://github.com/huggingface/datasets/pull/857
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/857", "html_url": "https://github.com/huggingface/datasets/pull/857", "diff_url": "https://github.com/huggingface/datasets/pull/857.diff", "patch_url": "https://github.com/huggingface/datasets/pull/857.patch", "merged_at": "2020-11-19T17:35:38"...
true
743,799,239
856
Add open book corpus
closed
[]
2020-11-16T12:30:02
2024-01-04T13:20:51
2020-11-17T15:22:18
Adds book corpus based on Shawn Presser's [work](https://github.com/soskek/bookcorpus/issues/27) @richarddwang, the author of the original BookCorpus dataset, suggested it should be named [OpenBookCorpus](https://github.com/huggingface/datasets/issues/486). I named it BookCorpusOpen to be easily located alphabetically...
vblagoje
https://github.com/huggingface/datasets/pull/856
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/856", "html_url": "https://github.com/huggingface/datasets/pull/856", "diff_url": "https://github.com/huggingface/datasets/pull/856.diff", "patch_url": "https://github.com/huggingface/datasets/pull/856.patch", "merged_at": "2020-11-17T15:22:17"...
true
743,690,839
855
Fix kor nli csv reader
closed
[]
2020-11-16T09:53:41
2020-11-16T13:59:14
2020-11-16T13:59:12
The kor_nli dataset had an issue with the csv reader that was not able to parse the lines correctly. Some lines were merged together for some reason. I fixed that by iterating through the lines directly instead of using a csv reader. I also changed the feature names to match the other NLI datasets (i.e. use "premise"...
lhoestq
https://github.com/huggingface/datasets/pull/855
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/855", "html_url": "https://github.com/huggingface/datasets/pull/855", "diff_url": "https://github.com/huggingface/datasets/pull/855.diff", "patch_url": "https://github.com/huggingface/datasets/pull/855.patch", "merged_at": "2020-11-16T13:59:12"...
true
743,675,376
854
wmt16 does not download
closed
[]
2020-11-16T09:31:51
2022-10-05T12:27:42
2022-10-05T12:27:42
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/...
rabeehk
https://github.com/huggingface/datasets/issues/854
null
false
743,426,583
853
concatenate_datasets support axis=0 or 1 ?
closed
[]
2020-11-16T02:46:23
2021-04-19T16:07:18
2021-04-19T16:07:18
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
renqingcolin
https://github.com/huggingface/datasets/issues/853
null
false
743,396,240
852
wmt cannot be downloaded
closed
[]
2020-11-16T01:04:41
2020-11-16T09:31:58
2020-11-16T09:31:58
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/...
rabeehk
https://github.com/huggingface/datasets/issues/852
null
false
742,369,419
850
Create ClassLabel for labelling tasks datasets
closed
[]
2020-11-13T11:07:22
2020-11-16T10:32:05
2020-11-16T10:31:58
This PR adds a specific `ClassLabel` for the datasets that are about a labelling task such as POS, NER or Chunking.
jplu
https://github.com/huggingface/datasets/pull/850
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/850", "html_url": "https://github.com/huggingface/datasets/pull/850", "diff_url": "https://github.com/huggingface/datasets/pull/850.diff", "patch_url": "https://github.com/huggingface/datasets/pull/850.patch", "merged_at": "2020-11-16T10:31:58"...
true
742,263,333
849
Load amazon dataset
closed
[]
2020-11-13T08:34:24
2020-11-17T07:22:59
2020-11-17T07:22:59
Hi, I was going through amazon_us_reviews dataset and found that example API usage given on website is different from the API usage while loading dataset. Eg. what API usage is on the [website](https://huggingface.co/datasets/amazon_us_reviews) ``` from datasets import load_dataset dataset = load_dataset("amaz...
bhavitvyamalik
https://github.com/huggingface/datasets/issues/849
null
false
742,240,942
848
Error when concatenate_datasets
closed
[]
2020-11-13T07:56:02
2020-11-13T17:40:59
2020-11-13T15:55:10
Hello, when I concatenate two dataset loading from disk, I encountered a problem: ``` test_dataset = load_from_disk('data/test_dataset') trn_dataset = load_from_disk('data/train_dataset') train_dataset = concatenate_datasets([trn_dataset, test_dataset]) ``` And it reported ValueError blow: ``` --------------...
shexuan
https://github.com/huggingface/datasets/issues/848
null
false
742,179,495
847
multiprocessing in dataset map "can only test a child process"
closed
[]
2020-11-13T06:01:04
2022-10-05T12:22:51
2022-10-05T12:22:51
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` -------------------------...
timothyjlaurent
https://github.com/huggingface/datasets/issues/847
null
false
741,885,174
846
Add HoVer multi-hop fact verification dataset
closed
[]
2020-11-12T19:55:46
2020-12-10T21:47:33
2020-12-10T21:47:33
## Adding a Dataset - **Name:** HoVer - **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples - **Paper:** https://arxiv.org/abs/2011.03088 - **Data:** https://hover-nlp.github.io/ - **Motivation:** There are still few multi-hop information extraction...
yjernite
https://github.com/huggingface/datasets/issues/846
null
false
741,841,350
845
amazon description fields as bullets
closed
[]
2020-11-12T18:50:41
2020-11-12T18:50:54
2020-11-12T18:50:54
One more minor formatting change to amazon reviews's description (in addition to #844). Just reformatting the fields to display as a bulleted list in markdown.
joeddav
https://github.com/huggingface/datasets/pull/845
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/845", "html_url": "https://github.com/huggingface/datasets/pull/845", "diff_url": "https://github.com/huggingface/datasets/pull/845.diff", "patch_url": "https://github.com/huggingface/datasets/pull/845.patch", "merged_at": "2020-11-12T18:50:54"...
true
741,835,661
844
add newlines to amazon desc
closed
[]
2020-11-12T18:41:20
2020-11-12T18:42:25
2020-11-12T18:42:21
Just a quick formatting fix to hopefully make it render nicer on Viewer
joeddav
https://github.com/huggingface/datasets/pull/844
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/844", "html_url": "https://github.com/huggingface/datasets/pull/844", "diff_url": "https://github.com/huggingface/datasets/pull/844.diff", "patch_url": "https://github.com/huggingface/datasets/pull/844.patch", "merged_at": "2020-11-12T18:42:21"...
true
741,531,121
843
use_custom_baseline still produces errors for bertscore
closed
[]
2020-11-12T11:44:32
2024-05-28T16:30:17
2021-02-09T14:21:48
`metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py"...
penatbater
https://github.com/huggingface/datasets/issues/843
null
false
741,208,428
842
How to enable `.map()` pre-processing pipelines to support multi-node parallelism?
open
[]
2020-11-12T02:04:38
2025-03-26T09:10:22
null
Hi, Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training, (since more than one node would be available) I'm wondering if it's possible to extend the parallel processing among nodes, instead of only 1 node running the `.map()` while the other ...
shangw-nvidia
https://github.com/huggingface/datasets/issues/842
null
false
740,737,448
841
Can not reuse datasets already downloaded
closed
[]
2020-11-11T12:42:15
2020-11-11T18:17:16
2020-11-11T18:17:16
Hello, I need to connect to a frontal node (with http proxy, no gpu) before connecting to a gpu node (but no http proxy, so can not use wget so on). I successfully downloaded and reuse the wikipedia datasets in a frontal node. When I connect to the gpu node, I supposed to use the downloaded datasets from cache, but...
jc-hou
https://github.com/huggingface/datasets/issues/841
null
false
740,632,771
840
Update squad_v2.py
closed
[]
2020-11-11T09:58:41
2020-11-11T15:29:34
2020-11-11T15:26:35
Change lines 100 and 102 to prevent overwriting ```predictions``` variable.
Javier-Jimenez99
https://github.com/huggingface/datasets/pull/840
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/840", "html_url": "https://github.com/huggingface/datasets/pull/840", "diff_url": "https://github.com/huggingface/datasets/pull/840.diff", "patch_url": "https://github.com/huggingface/datasets/pull/840.patch", "merged_at": "2020-11-11T15:26:35"...
true
740,355,270
839
XSum dataset missing spaces between sentences
open
[]
2020-11-11T00:34:43
2020-11-11T00:34:43
null
I noticed that the XSum dataset has no space between sentences. This could lead to worse results for anyone training or testing on it. Here's an example (0th entry in the test set): `The London trio are up for best UK act and best album, as well as getting two nominations in the best song category."We got told like ...
loganlebanoff
https://github.com/huggingface/datasets/issues/839
null
false
740,328,382
838
CNN/Dailymail Dataset Card
closed
[]
2020-11-10T23:56:43
2020-11-25T21:09:51
2020-11-25T21:09:50
Link to the card page: https://github.com/mcmillanmajora/datasets/tree/cnn_dailymail_card/datasets/cnn_dailymail One of the questions this dataset brings up is how we want to handle versioning of the cards to mirror versions of the dataset. The different versions of this dataset are used for different tasks (which may...
mcmillanmajora
https://github.com/huggingface/datasets/pull/838
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/838", "html_url": "https://github.com/huggingface/datasets/pull/838", "diff_url": "https://github.com/huggingface/datasets/pull/838.diff", "patch_url": "https://github.com/huggingface/datasets/pull/838.patch", "merged_at": "2020-11-25T21:09:50"...
true
740,250,215
837
AlloCiné dataset card
closed
[]
2020-11-10T21:19:53
2020-11-25T21:56:27
2020-11-25T21:56:27
Link to the card page: https://github.com/mcmillanmajora/datasets/blob/allocine_card/datasets/allocine/README.md There wasn't as much information available for this dataset, so I'm wondering what's the best way to address open questions about the dataset. For example, where did the list of films that the dataset creat...
mcmillanmajora
https://github.com/huggingface/datasets/pull/837
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/837", "html_url": "https://github.com/huggingface/datasets/pull/837", "diff_url": "https://github.com/huggingface/datasets/pull/837.diff", "patch_url": "https://github.com/huggingface/datasets/pull/837.patch", "merged_at": "2020-11-25T21:56:27"...
true
740,187,613
836
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
closed
[]
2020-11-10T19:35:40
2021-11-24T16:59:19
2020-11-19T17:35:38
Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-...
randubin
https://github.com/huggingface/datasets/issues/836
null
false
740,102,210
835
Wikipedia postprocessing
closed
[]
2020-11-10T17:26:38
2020-11-10T18:23:20
2020-11-10T17:49:21
Hi, thanks for this library! Running this code: ```py import datasets wikipedia = datasets.load_dataset("wikipedia", "20200501.de") print(wikipedia['train']['text'][0]) ``` I get: ``` mini|Ricardo Flores Magón mini|Mexikanische Revolutionäre, Magón in der Mitte anführend, gegen die Diktatur von Porfir...
bminixhofer
https://github.com/huggingface/datasets/issues/835
null
false
740,082,890
834
[GEM] add WikiLingua cross-lingual abstractive summarization dataset
closed
[]
2020-11-10T17:00:43
2021-04-15T12:04:09
2021-04-15T12:01:38
## Adding a Dataset - **Name:** WikiLingua - **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article. - **Paper:** h...
yjernite
https://github.com/huggingface/datasets/issues/834
null
false
740,079,692
833
[GEM] add ASSET text simplification dataset
closed
[]
2020-11-10T16:56:30
2020-12-03T13:38:15
2020-12-03T13:38:15
## Adding a Dataset - **Name:** ASSET - **Description:** ASSET is a crowdsourced multi-reference corpus for assessing sentence simplification in English where each simplification was produced by executing several rewriting transformations. - **Paper:** https://www.aclweb.org/anthology/2020.acl-main.424.pdf - **Dat...
yjernite
https://github.com/huggingface/datasets/issues/833
null
false
740,077,228
832
[GEM] add WikiAuto text simplification dataset
closed
[]
2020-11-10T16:53:23
2020-12-03T13:38:08
2020-12-03T13:38:08
## Adding a Dataset - **Name:** WikiAuto - **Description:** Sentences in English Wikipedia and their corresponding sentences in Simple English Wikipedia that are written with simpler grammar and word choices. A lot of lexical and syntactic paraphrasing. - **Paper:** https://www.aclweb.org/anthology/2020.acl-main.70...
yjernite
https://github.com/huggingface/datasets/issues/832
null
false
740,071,697
831
[GEM] Add WebNLG dataset
closed
[]
2020-11-10T16:46:48
2020-12-03T13:38:01
2020-12-03T13:38:01
## Adding a Dataset - **Name:** WebNLG - **Description:** WebNLG consists of Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation of these triples (16,095 data inputs and 42,873 data-text pairs). The data is available in English and Russian - **Paper:** https://ww...
yjernite
https://github.com/huggingface/datasets/issues/831
null
false
740,065,376
830
[GEM] add ToTTo Table-to-text dataset
closed
[]
2020-11-10T16:38:34
2020-12-10T13:06:02
2020-12-10T13:06:01
## Adding a Dataset - **Name:** ToTTo - **Description:** ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description. - **Paper:** https://arxiv.o...
yjernite
https://github.com/huggingface/datasets/issues/830
null
false
740,061,699
829
[GEM] add Schema-Guided Dialogue
closed
[]
2020-11-10T16:33:44
2020-12-03T13:37:50
2020-12-03T13:37:50
## Adding a Dataset - **Name:** The Schema-Guided Dialogue Dataset - **Description:** The Schema-Guided Dialogue (SGD) dataset consists of over 20k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. These conversations involve interactions with services and APIs spanning 20 d...
yjernite
https://github.com/huggingface/datasets/issues/829
null
false
740,008,683
828
Add writer_batch_size attribute to GeneratorBasedBuilder
closed
[]
2020-11-10T15:28:19
2020-11-10T16:27:36
2020-11-10T16:27:36
As specified in #741 one would need to specify a custom ArrowWriter batch size to avoid filling the RAM. Indeed the defaults buffer size is 10 000 examples but for multimodal datasets that contain images or videos we may want to reduce that.
lhoestq
https://github.com/huggingface/datasets/pull/828
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/828", "html_url": "https://github.com/huggingface/datasets/pull/828", "diff_url": "https://github.com/huggingface/datasets/pull/828.diff", "patch_url": "https://github.com/huggingface/datasets/pull/828.patch", "merged_at": "2020-11-10T16:27:35"...
true
739,983,024
827
[GEM] MultiWOZ dialogue dataset
closed
[]
2020-11-10T14:57:50
2022-10-05T12:31:13
2022-10-05T12:31:13
## Adding a Dataset - **Name:** MultiWOZ (Multi-Domain Wizard-of-Oz) - **Description:** 10k annotated human-human dialogues. Each dialogue consists of a goal, multiple user and system utterances as well as a belief state. Only system utterances are annotated with dialogue acts – there are no annotations from the user...
yjernite
https://github.com/huggingface/datasets/issues/827
null
false
739,976,716
826
[GEM] Add E2E dataset
closed
[]
2020-11-10T14:50:40
2020-12-03T13:37:57
2020-12-03T13:37:57
## Adding a Dataset - **Name:** E2E NLG dataset (for End-to-end natural language generation) - **Description:**a dataset for training end-to-end, datadriven natural language generation systems in the restaurant domain, the datasets consists of 5,751 dialogue-act Meaning Representations (structured data) and 8.1 refer...
yjernite
https://github.com/huggingface/datasets/issues/826
null
false