Schema (one row per GitHub issue or pull request in huggingface/datasets; ranges are min–max over the split; "nullable" marks columns where the viewer showed the null glyph βŒ€):

  column           type          range / values
  ---------------  ------------  -----------------------------------------------
  id               int64         599M – 3.48B
  number           int64         1 – 7.8k
  title            string        length 1 – 290
  state            string        2 values
  comments         list          length 0 – 30
  created_at       timestamp[s]  2020-04-14 10:18:02 – 2025-10-05 06:37:50
  updated_at       timestamp[s]  2020-04-27 16:04:17 – 2025-10-05 10:32:43
  closed_at        timestamp[s]  2020-04-14 12:01:40 – 2025-10-01 13:56:03 (nullable)
  body             string        length 0 – 228k (nullable)
  user             string        length 3 – 26
  html_url         string        length 46 – 51
  pull_request     dict          (null for plain issues)
  is_pull_request  bool          2 classes

Records (newest first; bodies are quoted verbatim and truncated as in the source):
#825 · Add accuracy, precision, recall and F1 metrics
  pull request by jplu · closed, merged 2020-11-11T19:23:43 · id 739925960 · no comments
  created 2020-11-10T13:50:35 · updated 2020-11-11T19:23:48 · closed 2020-11-11T19:23:43
  https://github.com/huggingface/datasets/pull/825
  body: This PR adds several single metrics, namely: - Accuracy - Precision - Recall - F1 They all uses under the hood the sklearn metrics of the same name. They allow different useful features when training a multilabel/multiclass model: - have a macro/micro/per label/weighted/binary/per sample score - score only t...

#824 · Discussion using datasets in offline mode
  issue by mandubian · closed · id 739896526 · no comments
  created 2020-11-10T13:10:51 · updated 2023-10-26T09:26:26 · closed 2022-02-15T10:32:36
  https://github.com/huggingface/datasets/issues/824
  body: `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some point...
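The offline-mode discussion in #824 was eventually addressed by an environment variable; a minimal sketch, assuming a current `datasets` release that honors `HF_DATASETS_OFFLINE` (it must be set before the library is imported):

```python
import os

# Tell `datasets` to skip network calls and rely on the local cache only.
os.environ["HF_DATASETS_OFFLINE"] = "1"

# from datasets import load_dataset  # would now read cached files only
```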
#823 · how processing in batch works in datasets
  issue by rabeehkarimimahabadi · closed · id 739815763 · no comments
  created 2020-11-10T11:11:17 · updated 2020-11-10T13:11:10 · closed 2020-11-10T13:11:09
  https://github.com/huggingface/datasets/issues/823
  body: Hi, I need to process my datasets before it is passed to dataloader in batch, here is my codes ``` class AbstractTask(ABC): task_name: str = NotImplemented preprocessor: Callable = NotImplemented split_to_data_split: Mapping[str, str] = NotImplemented tokenizer: Callable = NotImplemented ...
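The batched-processing contract that #823 asks about: with `Dataset.map(batched=True)`, the mapped function receives a dict of columns (lists), not a single example. A pure-Python sketch of that contract (the helper names are illustrative, not library API):

```python
def uppercase_batch(batch):
    # `batch` is a dict of columns, each a list covering the whole slice --
    # the shape that map(batched=True) hands to your function.
    return {"text": [t.upper() for t in batch["text"]]}

def map_batched(rows, fn, batch_size=2):
    # Minimal stand-in for Dataset.map(batched=True): slice the rows,
    # transpose each slice into columns, apply fn, flatten back to rows.
    out = []
    for i in range(0, len(rows), batch_size):
        chunk = rows[i:i + batch_size]
        batch = {"text": [r["text"] for r in chunk]}
        result = fn(batch)
        out.extend({"text": t} for t in result["text"])
    return out

rows = [{"text": "a"}, {"text": "b"}, {"text": "c"}]
print(map_batched(rows, uppercase_batch))
```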
#822 · datasets freezes
  issue by rabeehkarimimahabadi · closed · id 739579314 · no comments
  created 2020-11-10T05:10:19 · updated 2023-07-20T16:08:14 · closed 2023-07-20T16:08:13
  https://github.com/huggingface/datasets/issues/822
  body: Hi, I want to load these two datasets and convert them to Dataset format in torch and the code freezes for me, could you have a look please? thanks dataset1 = load_dataset("squad", split="train[:10]") dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question']) dataset2 = load_datase...

#821 · `kor_nli` dataset doesn't being loaded properly
  issue by sackoh · closed · id 739506859 · no comments
  created 2020-11-10T02:04:12 · updated 2020-11-16T13:59:12 · closed 2020-11-16T13:59:12
  https://github.com/huggingface/datasets/issues/821
  body: There are two issues from `kor_nli` dataset 1. csv.DictReader failed to split features by tab - Should not exist `None` value in label feature, but there it is. ```python kor_nli_train['train'].unique('gold_label') # ['neutral', 'entailment', 'contradiction', None] ``` -...
#820 · Update quail dataset to v1.3
  pull request by ngdodd · closed, merged 2020-11-10T09:06:35 · id 739387617 · no comments
  created 2020-11-09T21:49:26 · updated 2020-11-10T09:06:35 · closed 2020-11-10T09:06:35
  https://github.com/huggingface/datasets/pull/820
  body: Updated quail to most recent version, to address the problem originally discussed [here](https://github.com/huggingface/datasets/issues/806).

#819 · Make save function use deterministic global vars order
  pull request by lhoestq · closed, merged 2020-11-11T15:20:50 · id 739250624 · no comments
  created 2020-11-09T18:12:03 · updated 2021-11-30T13:34:09 · closed 2020-11-11T15:20:51
  https://github.com/huggingface/datasets/pull/819
  body: The `dumps` function need to be deterministic for the caching mechanism. However in #816 I noticed that one of dill's method to recursively check the globals of a function may return the globals in different orders each time it's used. To fix that I sort the globals by key in the `globs` dictionary. I had to add a re...

#818 · Fix type hints pickling in python 3.6
  pull request by lhoestq · closed, merged 2020-11-10T09:07:01 · id 739173861 · no comments
  created 2020-11-09T16:27:47 · updated 2020-11-10T09:07:03 · closed 2020-11-10T09:07:02
  https://github.com/huggingface/datasets/pull/818
  body: Type hints can't be properly pickled in python 3.6. This was causing errors the `run_mlm.py` script from `transformers` with python 3.6 However Cloupickle proposed a [fix](https://github.com/cloudpipe/cloudpickle/pull/318/files) to make it work anyway. The idea is just to implement the pickling/unpickling of parame...
#817 · Add MRQA dataset
  issue by VictorSanh · closed · id 739145369 · no comments
  created 2020-11-09T15:52:19 · updated 2020-12-04T15:44:42 · closed 2020-12-04T15:44:41
  https://github.com/huggingface/datasets/issues/817
  body: ## Adding a Dataset - **Name:** MRQA - **Description:** Collection of different (subsets of) QA datasets all converted to the same format to evaluate out-of-domain generalization (the datasets come from different domains, distributions, etc.). Some datasets are used for training and others are used for evaluation. Th...

#816 · [Caching] Dill globalvars() output order is not deterministic and can cause cache issues.
  issue by lhoestq · closed · id 739102686 · no comments
  created 2020-11-09T15:01:20 · updated 2020-11-11T15:20:50 · closed 2020-11-11T15:20:50
  https://github.com/huggingface/datasets/issues/816
  body: Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. However the order of the keys in this dict is not deterministic and can cause caching issues. To fix that one could register an implementati...
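The determinism problem in #816 (fixed by #819) boils down to making the dump of a function's globals independent of dict ordering. A small stdlib-only sketch of the idea, sorting the globals by key before hashing (illustrative, not the actual dill/datasets code):

```python
import hashlib
import pickle

def fingerprint(globs):
    # Sort the globals by key before pickling so the digest no longer
    # depends on dict insertion order -- the root cause in #816.
    ordered = {k: globs[k] for k in sorted(globs)}
    return hashlib.sha256(pickle.dumps(ordered)).hexdigest()

a = fingerprint({"x": 1, "y": 2})
b = fingerprint({"y": 2, "x": 1})  # same globals, different order
assert a == b
```

Without the sort, `pickle.dumps` preserves insertion order, so two logically identical globals dicts can hash differently and invalidate the cache.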
#815 · Is dataset iterative or not?
  issue by rabeehkarimimahabadi · closed · id 738842092 · no comments
  created 2020-11-09T09:11:48 · updated 2020-11-10T10:50:03 · closed 2020-11-10T10:50:03
  https://github.com/huggingface/datasets/issues/815
  body: Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks
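Context for #815: at the time, `datasets` was map-style (Arrow-backed, random access); streaming `IterableDataset` support arrived later via `load_dataset(..., streaming=True)`. A pure-Python illustration of the two access styles (class names are illustrative only):

```python
class MapStyleDataset:
    """Random access: __getitem__ + __len__ (what `datasets` offered then)."""
    def __init__(self, rows):
        self._rows = rows
    def __len__(self):
        return len(self._rows)
    def __getitem__(self, i):
        return self._rows[i]

class IterableStyleDataset:
    """Sequential access only: examples are produced lazily on each pass."""
    def __init__(self, make_rows):
        self._make_rows = make_rows
    def __iter__(self):
        yield from self._make_rows()

ds = MapStyleDataset([{"text": "a"}, {"text": "b"}])
it = IterableStyleDataset(lambda: ({"text": t} for t in "ab"))
assert ds[1] == {"text": "b"}
assert [ex["text"] for ex in it] == ["a", "b"]
```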
#814 · Joining multiple datasets
  issue by rabeehkarimimahabadi · closed · id 738500443 · no comments
  created 2020-11-08T16:19:30 · updated 2020-11-08T19:38:48 · closed 2020-11-08T19:38:48
  https://github.com/huggingface/datasets/issues/814
  body: Hi I have multiple iterative datasets from your library with different size and I want to join them in a way that each datasets is sampled equally, so smaller datasets more, larger one less, could you tell me how to implement this in pytorch? thanks
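The sampling scheme #814 asks for (each dataset drawn equally often regardless of size) can be sketched without torch: pick a dataset uniformly at random, then an example inside it. The function below is an illustrative sketch, not library code:

```python
import random

def equal_probability_draws(sizes, n_draws, seed=0):
    """Return (dataset_index, example_index) pairs, choosing the dataset
    uniformly so small datasets are sampled as often as large ones."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        d = rng.randrange(len(sizes))   # which dataset (uniform)
        i = rng.randrange(sizes[d])     # which example inside it
        draws.append((d, i))
    return draws

# despite the 10 vs 100000 size gap, each dataset gets ~500 of 1000 draws
draws = equal_probability_draws([10, 100_000], 1_000)
```

In practice one would feed such (dataset, index) pairs to a PyTorch sampler or interleave the underlying iterators the same way.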
#813 · How to implement DistributedSampler with datasets
  issue by rabeehkarimimahabadi · closed · id 738489852 · no comments
  created 2020-11-08T15:27:11 · updated 2022-10-05T12:54:23 · closed 2022-10-05T12:54:23
  https://github.com/huggingface/datasets/issues/813
  body: Hi, I am using your datasets to define my dataloaders, and I am training finetune_trainer.py in huggingface repo on them. I need a distributedSampler to be able to train the models on TPUs being able to distribute the load across the TPU cores. Could you tell me how I can implement the distribued sampler when using d...
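The core of what `torch.utils.data.DistributedSampler` does for #813 is round-robin sharding of indices across replicas; it can be sketched without torch (shuffling and padding, which the real sampler also does, are omitted):

```python
def shard_indices(dataset_len, num_replicas, rank):
    # Rank r processes indices r, r + num_replicas, r + 2*num_replicas, ...
    # torch's DistributedSampler additionally shuffles per epoch and pads
    # so every rank sees the same number of samples.
    return list(range(rank, dataset_len, num_replicas))

# every index is covered exactly once across the 4 replicas
shards = [shard_indices(10, 4, r) for r in range(4)]
assert sorted(i for s in shards for i in s) == list(range(10))
```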
#812 · Too much logging
  issue by dspoka · closed · id 738340217 · no comments
  created 2020-11-07T23:56:30 · updated 2021-01-26T14:31:34 · closed 2020-11-16T17:06:42
  https://github.com/huggingface/datasets/issues/812
  body: I'm doing this in the beginning of my script: from datasets.utils import logging as datasets_logging datasets_logging.set_verbosity_warning() but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1...
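The stray messages in #812 come from the `filelock` library's own logger, which `datasets.utils.logging.set_verbosity_warning()` did not control at the time. Raising that logger's level directly (stdlib only) silences them:

```python
import logging

# `filelock` logs "Lock ... acquired" at INFO level under its own logger
# name, outside the `datasets` logger hierarchy -- so set its level too.
logging.getLogger("filelock").setLevel(logging.WARNING)

# INFO records from filelock are now filtered out
assert not logging.getLogger("filelock").isEnabledFor(logging.INFO)
```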
#811 · nlp viewer error
  issue by jc-hou · closed · id 738280132 · no comments
  created 2020-11-07T17:08:58 · updated 2022-02-15T10:51:44 · closed 2022-02-14T15:24:20
  https://github.com/huggingface/datasets/issues/811
  body: Hello, when I select amazon_us_reviews in nlp viewer, it shows error. https://huggingface.co/nlp/viewer/?dataset=amazon_us_reviews ![image](https://user-images.githubusercontent.com/30210529/98447334-4aa81200-2124-11eb-9dca-82c3ab34ccc2.png)

#810 · Fix seqeval metric
  pull request by sgugger · closed, merged 2020-11-09T14:04:27 · id 737878370 · no comments
  created 2020-11-06T16:11:43 · updated 2020-11-09T14:04:29 · closed 2020-11-09T14:04:28
  https://github.com/huggingface/datasets/pull/810
  body: The current seqeval metric returns the following error when computed: ``` ~/.cache/huggingface/modules/datasets_modules/metrics/seqeval/78a944d83252b5a16c9a2e49f057f4c6e02f18cc03349257025a8c9aea6524d8/seqeval.py in _compute(self, predictions, references, suffix) 102 scores = {} 103 for type_...
#809 · Add Google Taskmaster dataset
  issue by yjernite · closed · id 737832701 · no comments
  created 2020-11-06T15:10:41 · updated 2021-04-20T13:09:26 · closed 2021-04-20T13:09:26
  https://github.com/huggingface/datasets/issues/809
  body: ## Adding a Dataset - **Name:** Taskmaster - **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations) - **Paper:** https://arxiv.org/abs/1909.05358 - **Data:** https://github.com/google-research-datasets/Taskmaster - **Motivation...

#808 · dataset(dgs): initial dataset loading script
  pull request by AmitMY · closed, not merged · id 737638942 · no comments
  created 2020-11-06T10:14:43 · updated 2021-03-23T06:18:55 · closed 2021-03-23T06:18:55
  https://github.com/huggingface/datasets/pull/808
  body: When trying to create dummy data I get: > Dataset datasets with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has t o be created with less guidance. ...

#807 · load_dataset for LOCAL CSV files report CONNECTION ERROR
  issue by shexuan · closed · id 737509954 · no comments
  created 2020-11-06T06:33:04 · updated 2021-01-11T01:30:27 · closed 2020-11-14T05:30:34
  https://github.com/huggingface/datasets/issues/807
  body: ## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=Fal...
#806 · Quail dataset urls are out of date
  issue by ngdodd · closed · id 737215430 · no comments
  created 2020-11-05T19:40:19 · updated 2020-11-10T14:02:51 · closed 2020-11-10T14:02:51
  https://github.com/huggingface/datasets/issues/806
  body: <h3>Code</h3> ``` from datasets import load_dataset quail = load_dataset('quail') ``` <h3>Error</h3> ``` FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml ``` As per [quail v1.3 commit](https://github.co...

#805 · On loading a metric from datasets, I get the following error
  issue by laibamehnaz · closed · id 737019360 · no comments
  created 2020-11-05T15:14:38 · updated 2022-02-14T15:32:59 · closed 2022-02-14T15:32:59
  https://github.com/huggingface/datasets/issues/805
  body: `from datasets import load_metric` `metric = load_metric('bleurt')` Traceback: 210 class _ArrayXDExtensionType(pa.PyExtensionType): 211 212 ndims: int = None AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' Any help will be appreciated. Thank you.

#804 · Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa')
  issue by PaulLerner · closed · id 736858507 · no comments
  created 2020-11-05T11:38:01 · updated 2020-11-09T14:14:59 · closed 2020-11-09T14:14:58
  https://github.com/huggingface/datasets/issues/804
  body: # The issue It's all in the title, it appears to be fine on the train and validation sets. Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md) ? # How to reproduce ```py from datasets import load_dataset kilt_tas...
#803 · fix: typos in tutorial to map KILT and TriviaQA
  pull request by PaulLerner · closed, merged 2020-11-10T09:08:07 · id 736818917 · no comments · no body
  created 2020-11-05T10:42:00 · updated 2020-11-10T09:08:07 · closed 2020-11-10T09:08:07
  https://github.com/huggingface/datasets/pull/803

#802 · Add XGlue
  pull request by patrickvonplaten · closed, merged 2020-12-01T15:58:27 · id 736296343 · no comments
  created 2020-11-04T17:29:54 · updated 2022-04-28T08:15:36 · closed 2020-12-01T15:58:27
  https://github.com/huggingface/datasets/pull/802
  body: Dataset is ready to merge. An important feature of this dataset is that for each config the train data is in English, while dev and test data are in multiple languages. Therefore, @lhoestq and I decided offline that we will give the dataset the following API, *e.g.* for ```python load_dataset("xglue", "ner") # wo...

#801 · How to join two datasets?
  issue by shangw-nvidia · closed · id 735790876 · no comments
  created 2020-11-04T03:53:11 · updated 2020-12-23T14:02:58 · closed 2020-12-23T14:02:58
  https://github.com/huggingface/datasets/issues/801
  body: Hi, I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels? I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` where the second sentence is...
#800 · Update loading_metrics.rst
  pull request by ayushidalmia · closed, merged 2020-11-11T15:28:32 · id 735772775 · no comments
  created 2020-11-04T02:57:11 · updated 2020-11-11T15:28:32 · closed 2020-11-11T15:28:32
  https://github.com/huggingface/datasets/pull/800
  body: Minor bug

#799 · switch amazon reviews class label order
  pull request by joeddav · closed, merged 2020-11-03T18:44:10 · id 735551165 · no comments
  created 2020-11-03T18:38:58 · updated 2020-11-03T18:44:14 · closed 2020-11-03T18:44:10
  https://github.com/huggingface/datasets/pull/799
  body: Switches the label order to be more intuitive for amazon reviews, #791.

#798 · Cannot load TREC dataset: ConnectionError
  issue by kaletap · closed · id 735518805 · no comments
  created 2020-11-03T17:45:22 · updated 2022-02-14T15:34:22 · closed 2022-02-14T15:34:22
  https://github.com/huggingface/datasets/issues/798
  body: ## Problem I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True...
#797 · Token classification labels are strings and we don't have the list of labels
  issue by sgugger · closed · id 735420332 · no comments
  created 2020-11-03T15:33:30 · updated 2022-02-14T15:41:54 · closed 2022-02-14T15:41:53
  https://github.com/huggingface/datasets/issues/797
  body: Not sure if this is an issue we want to fix or not, putting it here so it's not forgotten. Right now, in token classification datasets, the labels for NER, POS and the likes are typed as `Sequence` of `strings`, which is wrong in my opinion. These should be `Sequence` of `ClassLabel` or some types that gives easy acces...

#795 · Descriptions of raw and processed versions of wikitext are inverted
  issue by fraboniface · closed · id 735198265 · no comments
  created 2020-11-03T10:24:51 · updated 2022-02-14T15:46:21 · closed 2022-02-14T15:46:21
  https://github.com/huggingface/datasets/issues/795
  body: Nothing of importance, but it looks like the descriptions of wikitext-n-v1 and wikitext-n-raw-v1 are inverted for both n=2 and n=103. I just verified by loading them and the `<unk>` tokens are present in the non-raw versions, which confirms that it's a mere inversion of the descriptions and not of the datasets themselv...

#794 · self.options cannot be converted to a Python object for pickling
  issue by hzqjyyx · closed · id 735158725 · no comments
  created 2020-11-03T09:27:34 · updated 2020-11-19T17:35:38 · closed 2020-11-19T17:35:38
  https://github.com/huggingface/datasets/issues/794
  body: Hi, Currently I am trying to load csv file with customized read_options. And the latest master seems broken if we pass the ReadOptions object. Here is a code snippet ```python from datasets import load_dataset from pyarrow.csv import ReadOptions load_dataset("csv", data_files=["out.csv"], read_options=ReadOpt...
#793 · [Datasets] fix discofuse links
  pull request by patrickvonplaten · closed, merged 2020-11-03T08:16:40 · id 735105907 · no comments
  created 2020-11-03T08:03:45 · updated 2020-11-03T08:16:41 · closed 2020-11-03T08:16:40
  https://github.com/huggingface/datasets/pull/793
  body: The discofuse links were changed: https://github.com/google-research-datasets/discofuse/commit/d27641016eb5b3eb2af03c7415cfbb2cbebe8558. The old links are broken I changed the links and created the new dataset_infos.json. Pinging @thomwolf @lhoestq for notification.

#792 · KILT dataset: empty string in triviaqa input field
  issue by PaulLerner · closed · id 734693652 · no comments
  created 2020-11-02T17:33:54 · updated 2020-11-05T10:34:59 · closed 2020-11-05T10:34:59
  https://github.com/huggingface/datasets/issues/792
  body: # What happened Both train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have empty string in their input field (unlike the natural questions dataset, part of the same benchmark) # Versions KILT version is `1.0.0` `datasets` version is `1.1.2` [more here](https://gist.github.com/Pa...

#791 · add amazon reviews
  pull request by joeddav · closed, merged 2020-11-03T16:43:57 · id 734656518 · no comments
  created 2020-11-02T16:42:57 · updated 2020-11-03T20:15:06 · closed 2020-11-03T16:43:57
  https://github.com/huggingface/datasets/pull/791
  body: Adds the Amazon US Reviews dataset as requested in #353. Converted from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/amazon_us_reviews). cc @clmnt @sshleifer
#790 · Error running pip install -e ".[dev]" on MacOS 10.13.6: faiss/python does not exist
  issue by shawwn · closed · id 734470197 · no comments
  created 2020-11-02T12:36:35 · updated 2020-11-10T14:05:02 · closed 2020-11-10T14:05:02
  https://github.com/huggingface/datasets/issues/790
  body: I was following along with https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset when I ran into this error. ```sh git clone https://github.com/huggingface/datasets cd datasets virtualenv venv -p python3 --system-site-packages source venv/bin/activate pip install -e "....

#789 · dataset(ncslgr): add initial loading script
  pull request by AmitMY · closed, not merged · id 734237839 · no comments
  created 2020-11-02T06:50:10 · updated 2020-12-01T13:41:37 · closed 2020-12-01T13:41:36
  https://github.com/huggingface/datasets/pull/789
  body: Its a small dataset, but its heavily annotated https://www.bu.edu/asllrp/ncslgr.html ![image](https://user-images.githubusercontent.com/5757359/97838609-3c539380-1ce9-11eb-885b-a15d4c91ea49.png)

#788 · failed to reuse cache
  issue by WangHexie · closed · id 734136124 · no comments
  created 2020-11-02T02:42:36 · updated 2020-11-02T12:26:15 · closed 2020-11-02T12:26:15
  https://github.com/huggingface/datasets/issues/788
  body: I packed the `load_dataset ` in a function of class, and cached data in a directory. But when I import the class and use the function, the data still have to be downloaded again. The information (Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown si...
#787 · Adding nli_tr dataset
  pull request by e-budur · closed, merged 2020-11-12T19:06:02 · id 734070162 · no comments
  created 2020-11-01T21:49:44 · updated 2020-11-12T19:06:02 · closed 2020-11-12T19:06:02
  https://github.com/huggingface/datasets/pull/787
  body: Hello, In this pull request, we have implemented the necessary interface to add our recent dataset [NLI-TR](https://github.com/boun-tabi/NLI-TR). The datasets will be presented on a full paper at EMNLP 2020 this month. [[arXiv link] ](https://arxiv.org/pdf/2004.14963.pdf) The dataset is the neural machine transl...

#786 · feat(dataset): multiprocessing _generate_examples
  issue by AmitMY · closed · id 733761717 · no comments
  created 2020-10-31T16:52:16 · updated 2023-01-16T10:59:13 · closed 2023-01-16T10:59:13
  https://github.com/huggingface/datasets/issues/786
  body: forking this out of #741, this issue is only regarding multiprocessing I'd love if there was a dataset configuration parameter `workers`, where when it is `1` it behaves as it does right now, and when its `>1` maybe `_generate_examples` can also get the `pool` and return an iterable using the pool. In my use case...

#785 · feat(aslg_pc12): add dev and test data splits
  pull request by AmitMY · closed, not merged · id 733719419 · no comments
  created 2020-10-31T13:25:38 · updated 2020-11-10T15:29:30 · closed 2020-11-10T15:29:30
  https://github.com/huggingface/datasets/pull/785
  body: For reproducibility sake, it's best if there are defined dev and test splits. The original paper author did not define splits for the entire dataset, not for the sample loaded via this library, so I decided to define: - 5/7th for train - 1/7th for dev - 1/7th for test
#784 · Issue with downloading Wikipedia data for low resource language
  issue by SamuelCahyawijaya · closed · id 733700463 · no comments
  created 2020-10-31T11:40:00 · updated 2022-02-09T17:50:16 · closed 2020-11-25T15:42:13
  https://github.com/huggingface/datasets/issues/784
  body: Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet ``` jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner') su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner') ``` And I get the following error for these tw...

#783 · updated links to v1.3 of quail, fixed the description
  pull request by annargrs · closed, not merged · id 733536254 · no comments
  created 2020-10-30T21:47:33 · updated 2020-11-29T23:05:19 · closed 2020-11-29T23:05:18
  https://github.com/huggingface/datasets/pull/783
  body: updated links to v1.3 of quail, fixed the description

#782 · Fix metric deletion when attribuets are missing
  pull request by lhoestq · closed, merged 2020-10-30T16:47:52 · id 733316463 · no comments
  created 2020-10-30T16:16:10 · updated 2020-10-30T16:47:53 · closed 2020-10-30T16:47:52
  https://github.com/huggingface/datasets/pull/782
  body: When you call `del` on a metric we want to make sure that the arrow attributes are not already deleted. I just added `if hasattr(...)` to make sure it doesn't crash
#781 · Add XNLI train set
  pull request by lhoestq · closed, merged 2020-11-09T18:22:49 · id 733168609 · no comments
  created 2020-10-30T13:21:53 · updated 2022-06-09T23:26:46 · closed 2020-11-09T18:22:49
  https://github.com/huggingface/datasets/pull/781
  body: I added the train set that was built using the translated MNLI. Now you can load the dataset specifying one language: ```python from datasets import load_dataset xnli_en = load_dataset("xnli", "en") print(xnli_en["train"][0]) # {'hypothesis': 'Product and geography are what make cream skimming work .', 'label':...

#780 · Add ASNQ dataset
  pull request by mkserge · closed, merged 2020-11-10T09:26:23 · id 732738647 · no comments
  created 2020-10-29T23:31:56 · updated 2020-11-10T09:26:23 · closed 2020-11-10T09:26:23
  https://github.com/huggingface/datasets/pull/780
  body: This pull request adds the ASNQ dataset. It is a dataset for answer sentence selection derived from Google Natural Questions (NQ) dataset (Kwiatkowski et al. 2019). The dataset details can be found in the paper at https://arxiv.org/abs/1911.04118 The dataset is authored by Siddhant Garg, Thuy Vu and Alessandro Mosch...

#779 · Feature/fidelity metrics from emnlp2020 evaluating and characterizing human rationales
  pull request by rathoreanirudh · closed, not merged · id 732514887 · no comments
  created 2020-10-29T17:31:14 · updated 2023-07-11T09:36:30 · closed 2023-07-11T09:36:30
  https://github.com/huggingface/datasets/pull/779
  body: This metric computes fidelity (Yu et al. 2019, DeYoung et al. 2019) and normalized fidelity (Carton et al. 2020).
#778 · Unexpected behavior when loading cached csv file?
  issue by dcfidalgo · closed · id 732449652 · no comments
  created 2020-10-29T16:06:10 · updated 2020-10-29T21:21:27 · closed 2020-10-29T21:21:27
  https://github.com/huggingface/datasets/issues/778
  body: I read a csv file from disk and forgot so specify the right delimiter. When i read the csv file again specifying the right delimiter it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since i can always specify `download_mode="force_redownload"`. But i think it would be n...
#777 · Better error message for uninitialized metric
  pull request by lhoestq · closed, merged 2020-10-29T15:18:23 · id 732376648 · no comments
  created 2020-10-29T14:42:50 · updated 2020-10-29T15:18:26 · closed 2020-10-29T15:18:24
  https://github.com/huggingface/datasets/pull/777
  body: When calling `metric.compute()` without having called `metric.add` or `metric.add_batch` at least once, the error was quite cryptic. I added a better error message Fix #729

#776 · Allow custom split names in text dataset
  pull request by lhoestq · closed, merged 2020-10-30T13:23:52 · id 732343550 · no comments
  created 2020-10-29T14:04:06 · updated 2020-10-30T13:46:45 · closed 2020-10-30T13:23:52
  https://github.com/huggingface/datasets/pull/776
  body: The `text` dataset used to return only splits like train, test and validation. Other splits were ignored. Now any split name is allowed. I did the same for `json`, `pandas` and `csv` Fix #735

#775 · Properly delete metrics when a process is killed
  pull request by lhoestq · closed, merged 2020-10-29T14:01:19 · id 732287504 · no comments
  created 2020-10-29T12:52:07 · updated 2020-10-29T14:01:20 · closed 2020-10-29T14:01:19
  https://github.com/huggingface/datasets/pull/775
  body: Tests are flaky when using metrics in distributed setup. There is because of one test that make sure that using two possibly incompatible metric computation (same exp id) either works or raises the right error. However if the error is raised, all the processes of the metric are killed, and the open files (arrow + loc...
#774 · [ROUGE] Add description to Rouge metric
  pull request by patrickvonplaten · closed, merged 2020-10-29T17:55:48 · id 732265741 · no comments
  created 2020-10-29T12:19:32 · updated 2020-10-29T17:55:50 · closed 2020-10-29T17:55:48
  https://github.com/huggingface/datasets/pull/774
  body: Add information about case sensitivity to ROUGE.

#773 · Adding CC-100: Monolingual Datasets from Web Crawl Data
  issue by yjernite · closed · id 731684153 · no comments
  created 2020-10-28T18:20:41 · updated 2022-01-26T13:22:54 · closed 2020-12-14T10:20:07
  https://github.com/huggingface/datasets/issues/773
  body: ## Adding a Dataset - **Name:** CC-100: Monolingual Datasets from Web Crawl Data - **Description:** https://twitter.com/alex_conneau/status/1321507120848625665 - **Paper:** https://arxiv.org/abs/1911.02116 - **Data:** http://data.statmt.org/cc-100/ - **Motivation:** A large scale multi-lingual language modeling da...

#772 · Fix metric with cache dir
  pull request by lhoestq · closed, merged 2020-10-29T09:34:42 · id 731612430 · no comments
  created 2020-10-28T16:43:13 · updated 2020-10-29T09:34:44 · closed 2020-10-29T09:34:43
  https://github.com/huggingface/datasets/pull/772
  body: The cache_dir provided by the user was concatenated twice and therefore causing FileNotFound errors. The tests didn't cover the case of providing `cache_dir=` for metrics because of a stupid issue (it was not using the right parameter). I remove the double concatenation and I fixed the tests Fix #728
731,482,213
771
Using `Dataset.map` with `n_proc>1` print multiple progress bars
closed
[]
2020-10-28T14:13:27
2023-02-13T20:16:39
2023-02-13T20:16:39
When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed.
sgugger
https://github.com/huggingface/datasets/issues/771
null
false
731,445,222
770
Fix custom builder caching
closed
[]
2020-10-28T13:32:24
2020-10-29T09:36:03
2020-10-29T09:36:01
The cache directory of a dataset didn't take into account additional parameters that the user could specify such as `features` or any parameter of the builder configuration kwargs (ex: `encoding` for the `text` dataset). To fix that, the cache directory name now has a suffix that depends on all of them. Fix #730 ...
lhoestq
https://github.com/huggingface/datasets/pull/770
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/770", "html_url": "https://github.com/huggingface/datasets/pull/770", "diff_url": "https://github.com/huggingface/datasets/pull/770.diff", "patch_url": "https://github.com/huggingface/datasets/pull/770.patch", "merged_at": "2020-10-29T09:36:01"...
true
731,257,104
769
How to choose proper download_mode in function load_dataset?
closed
[]
2020-10-28T09:16:19
2022-02-22T12:22:52
2022-02-22T12:22:52
Hi, I am a beginner with datasets and I am trying to use datasets to load my csv file. My csv file looks like this ``` text,label "Effective but too-tepid biopic",3 "If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4 "Emerges as something rare , an issue movie that 's so hones...
jzq2000
https://github.com/huggingface/datasets/issues/769
null
false
730,908,060
768
Add a `lazy_map` method to `Dataset` and `DatasetDict`
open
[]
2020-10-27T22:33:03
2020-10-28T08:58:13
null
The library is great, but it would be even more awesome with a `lazy_map` method implemented on `Dataset` and `DatasetDict`. This would apply a function to a given item only when the item is requested. Two use cases: 1. load image on the fly 2. apply a random function and get different outputs at each epoch (like dat...
sgugger
https://github.com/huggingface/datasets/issues/768
null
false
730,771,610
767
Add option for named splits when using ds.train_test_split
open
[]
2020-10-27T19:59:44
2020-11-10T14:05:21
null
### Feature Request 🚀 Can we add a way to name your splits when using the `.train_test_split` function? In almost every use case I've come across, I have a `train` and a `test` split in my `DatasetDict`, and I want to create a `validation` split. Therefore, it's kinda useless to get a `test` split back from `tra...
nateraw
https://github.com/huggingface/datasets/issues/767
null
false
730,669,596
766
[GEM] add DART data-to-text generation dataset
closed
[]
2020-10-27T17:34:04
2020-12-03T13:37:18
2020-12-03T13:37:18
## Adding a Dataset - **Name:** DART - **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set. - **P...
yjernite
https://github.com/huggingface/datasets/issues/766
null
false
730,668,332
765
[GEM] Add DART data-to-text generation dataset
closed
[]
2020-10-27T17:32:23
2020-10-27T17:34:21
2020-10-27T17:34:21
## Adding a Dataset - **Name:** DART - **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set. - **P...
yjernite
https://github.com/huggingface/datasets/issues/765
null
false
730,617,828
764
Adding Issue Template for Dataset Requests
closed
[]
2020-10-27T16:37:08
2020-10-27T17:25:26
2020-10-27T17:25:25
adding .github/ISSUE_TEMPLATE/add-dataset.md
yjernite
https://github.com/huggingface/datasets/pull/764
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/764", "html_url": "https://github.com/huggingface/datasets/pull/764", "diff_url": "https://github.com/huggingface/datasets/pull/764.diff", "patch_url": "https://github.com/huggingface/datasets/pull/764.patch", "merged_at": "2020-10-27T17:25:25"...
true
730,593,631
763
Fixed errors in bertscore related to custom baseline
closed
[]
2020-10-27T16:08:35
2020-10-28T17:59:25
2020-10-28T17:59:25
[bertscore version 0.3.6 ](https://github.com/Tiiiger/bert_score) added support for custom baseline files. This update added extra argument `baseline_path` to BERTScorer class as well as some extra boolean parameters `use_custom_baseline` in functions like `get_hash(model, num_layers, idf, rescale_with_baseline, use_cu...
juanjucm
https://github.com/huggingface/datasets/pull/763
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/763", "html_url": "https://github.com/huggingface/datasets/pull/763", "diff_url": "https://github.com/huggingface/datasets/pull/763.diff", "patch_url": "https://github.com/huggingface/datasets/pull/763.patch", "merged_at": "2020-10-28T17:59:25"...
true
730,586,972
762
[GEM] Add Czech Restaurant data-to-text generation dataset
closed
[]
2020-10-27T16:00:47
2020-12-03T13:37:44
2020-12-03T13:37:44
- Paper: https://www.aclweb.org/anthology/W19-8670.pdf - Data: https://github.com/UFAL-DSG/cs_restaurant_dataset - The dataset will likely be part of the GEM benchmark
yjernite
https://github.com/huggingface/datasets/issues/762
null
false
729,898,867
761
Downloaded datasets are not usable offline
closed
[]
2020-10-26T20:54:46
2022-02-15T10:32:28
2022-02-15T10:32:28
I've been trying to use the IMDB dataset offline, but after downloading it and turning off the internet it still raises an error from the ```requests``` library trying to reach the online dataset. Is this the intended behavior? (Sorry, I wrote the first version of this issue while still on nlp 0.3.0).
ghazi-f
https://github.com/huggingface/datasets/issues/761
null
false
729,637,917
760
Add meta-data to the HANS dataset
closed
[]
2020-10-26T14:56:53
2020-12-03T13:38:34
2020-12-03T13:38:34
The current version of the [HANS dataset](https://github.com/huggingface/datasets/blob/master/datasets/hans/hans.py) is missing the additional information provided for each example, including the sentence parses, heuristic and subcase.
yjernite
https://github.com/huggingface/datasets/issues/760
null
false
729,046,916
759
(Load dataset failure) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
closed
[]
2020-10-25T15:34:57
2023-09-13T23:56:51
2021-08-04T18:10:09
Hey, I want to load the cnn-dailymail dataset for fine-tuning. I write the code like this from datasets import load_dataset test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train") And I got the following errors. Traceback (most recent call last): File "test.py", line 7, in test_dataset = load_da...
AI678
https://github.com/huggingface/datasets/issues/759
null
false
728,638,559
758
Process 0 very slow when using num_procs with map to tokenizer
closed
[]
2020-10-24T02:40:20
2020-10-28T03:59:46
2020-10-28T03:59:45
<img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png"> The code I am using is ``` dataset = load_dataset("text", data_files=[file_path], split='train') dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_speci...
ksjae
https://github.com/huggingface/datasets/issues/758
null
false
728,241,494
757
CUDA out of memory
closed
[]
2020-10-23T13:57:00
2020-12-23T14:06:29
2020-12-23T14:06:29
In your dataset, CUDA runs out of memory as soon as the trainer begins; however, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
li1117heex
https://github.com/huggingface/datasets/issues/757
null
false
728,211,373
756
Start community-provided dataset docs
closed
[]
2020-10-23T13:17:41
2020-10-26T12:55:20
2020-10-26T12:55:19
Continuation of #736 with clean fork. #### Old description This is what I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. It was pretty easy because all I had to do was make properly formatted directories and change URLs. In slack @thomwolf called it a user-...
sshleifer
https://github.com/huggingface/datasets/pull/756
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/756", "html_url": "https://github.com/huggingface/datasets/pull/756", "diff_url": "https://github.com/huggingface/datasets/pull/756.diff", "patch_url": "https://github.com/huggingface/datasets/pull/756.patch", "merged_at": "2020-10-26T12:55:19"...
true
728,203,821
755
Start community-provided dataset docs V2
closed
[]
2020-10-23T13:07:30
2020-10-23T13:15:37
2020-10-23T13:15:37
sshleifer
https://github.com/huggingface/datasets/pull/755
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/755", "html_url": "https://github.com/huggingface/datasets/pull/755", "diff_url": "https://github.com/huggingface/datasets/pull/755.diff", "patch_url": "https://github.com/huggingface/datasets/pull/755.patch", "merged_at": null }
true
727,863,105
754
Use full released xsum dataset
closed
[]
2020-10-23T03:29:49
2021-01-01T03:11:56
2020-10-26T12:56:58
#672 Fix xsum to expand coverage and include IDs Code based on parser from older version of `datasets/xsum/xsum.py` @lhoestq
jbragg
https://github.com/huggingface/datasets/pull/754
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/754", "html_url": "https://github.com/huggingface/datasets/pull/754", "diff_url": "https://github.com/huggingface/datasets/pull/754.diff", "patch_url": "https://github.com/huggingface/datasets/pull/754.patch", "merged_at": "2020-10-26T12:56:58"...
true
727,434,935
753
Fix doc links to viewer
closed
[]
2020-10-22T14:20:16
2020-10-23T08:42:11
2020-10-23T08:42:11
It seems #733 forgot some links in the doc :)
Pierrci
https://github.com/huggingface/datasets/pull/753
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/753", "html_url": "https://github.com/huggingface/datasets/pull/753", "diff_url": "https://github.com/huggingface/datasets/pull/753.diff", "patch_url": "https://github.com/huggingface/datasets/pull/753.patch", "merged_at": "2020-10-23T08:42:11"...
true
726,917,801
752
Clicking on a metric in the search page points to datasets page giving "Missing dataset" warning
closed
[]
2020-10-21T22:56:23
2020-10-22T16:19:42
2020-10-22T16:19:42
Hi! Sorry if this isn't the right place to talk about the website, I just didn't know exactly where to write this. Searching a metric in https://huggingface.co/metrics gives the right results but clicking on a metric (e.g. ROUGE) points to https://huggingface.co/datasets/rouge. Clicking on a metric without searching point...
ogabrielluiz
https://github.com/huggingface/datasets/issues/752
null
false
726,820,191
751
Error loading ms_marco v2.1 using load_dataset()
closed
[]
2020-10-21T19:54:43
2020-11-05T01:31:57
2020-11-05T01:31:57
Code: `dataset = load_dataset('ms_marco', 'v2.1')` Error: ``` `--------------------------------------------------------------------------- JSONDecodeError Traceback (most recent call last) <ipython-input-16-34378c057212> in <module>() 9 10 # Downloading and loading a data...
JainSahit
https://github.com/huggingface/datasets/issues/751
null
false
726,589,446
750
load_dataset doesn't include `features` in its hash
closed
[]
2020-10-21T15:16:41
2020-10-29T09:36:01
2020-10-29T09:36:01
It looks like the function `load_dataset` does not include what's passed in the `features` argument when creating a hash for a given dataset. As a result, if a user includes new features from an already downloaded dataset, those are ignored. Example: some models on the hub have a different ordering for the labels t...
sgugger
https://github.com/huggingface/datasets/issues/750
null
false
726,366,062
749
[XGLUE] Adding new dataset
closed
[]
2020-10-21T10:51:36
2022-09-30T11:35:30
2021-01-06T10:02:55
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf). I'm planning on adding the dataset to the library myself in a couple of weeks. Also tagging @JetRunner @qiweizhen in case I need some guidance
patrickvonplaten
https://github.com/huggingface/datasets/issues/749
null
false
726,196,589
748
New version of CompGuessWhat?! with refined annotations
closed
[]
2020-10-21T06:55:41
2020-10-21T08:52:42
2020-10-21T08:46:19
This pull request introduces a few fixes to the annotations for VisualGenome in the CompGuessWhat?! original split.
aleSuglia
https://github.com/huggingface/datasets/pull/748
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/748", "html_url": "https://github.com/huggingface/datasets/pull/748", "diff_url": "https://github.com/huggingface/datasets/pull/748.diff", "patch_url": "https://github.com/huggingface/datasets/pull/748.patch", "merged_at": "2020-10-21T08:46:19"...
true
725,884,704
747
Add Quail question answering dataset
closed
[]
2020-10-20T19:33:14
2020-10-21T08:35:15
2020-10-21T08:35:15
QuAIL is a multi-domain RC dataset featuring news, blogs, fiction and user stories. Each domain is represented by 200 texts, which gives us a 4-way data split. The texts are 300-350 word excerpts from CC-licensed texts that were hand-picked so as to make sense to human readers without larger context. Domain diversit...
sai-prasanna
https://github.com/huggingface/datasets/pull/747
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/747", "html_url": "https://github.com/huggingface/datasets/pull/747", "diff_url": "https://github.com/huggingface/datasets/pull/747.diff", "patch_url": "https://github.com/huggingface/datasets/pull/747.patch", "merged_at": "2020-10-21T08:35:15"...
true
725,627,235
746
dataset(ngt): add ngt dataset initial loading script
closed
[]
2020-10-20T14:04:58
2021-03-23T06:19:38
2021-03-23T06:19:38
Currently only making the paths to the annotation ELAN (eaf) file and videos available. This is the first accessible way to download this dataset that does not require manual, file-by-file downloads. Only downloading the necessary files, the annotation files are very small, 20MB for all of them, but the video files are large, 100GB i...
AmitMY
https://github.com/huggingface/datasets/pull/746
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/746", "html_url": "https://github.com/huggingface/datasets/pull/746", "diff_url": "https://github.com/huggingface/datasets/pull/746.diff", "patch_url": "https://github.com/huggingface/datasets/pull/746.patch", "merged_at": null }
true
725,589,352
745
Fix emotion description
closed
[]
2020-10-20T13:28:39
2021-04-22T14:47:31
2020-10-21T08:38:27
Fixes the description of the emotion dataset to reflect the class names observed in the data, not the ones described in the paper. I also took the liberty to make use of `ClassLabel` for the emotion labels.
lewtun
https://github.com/huggingface/datasets/pull/745
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/745", "html_url": "https://github.com/huggingface/datasets/pull/745", "diff_url": "https://github.com/huggingface/datasets/pull/745.diff", "patch_url": "https://github.com/huggingface/datasets/pull/745.patch", "merged_at": "2020-10-21T08:38:27"...
true
724,918,448
744
Dataset Explorer Doesn't Work for squad_es and squad_it
closed
[]
2020-10-19T19:34:12
2020-10-26T16:36:17
2020-10-26T16:36:17
https://huggingface.co/nlp/viewer/?dataset=squad_es https://huggingface.co/nlp/viewer/?dataset=squad_it Both pages show "OSError: [Errno 28] No space left on device".
gaotongxiao
https://github.com/huggingface/datasets/issues/744
null
false
724,703,980
743
load_dataset for CSV files not working
open
[]
2020-10-19T14:53:51
2025-04-24T06:35:25
null
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets. ` from datasets import load_dataset ` ` dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master") ` Displayed error: ` ... ArrowInva...
iliemihai
https://github.com/huggingface/datasets/issues/743
null
false
724,509,974
742
Add OCNLI, a new CLUE dataset
closed
[]
2020-10-19T11:06:33
2020-10-22T16:19:49
2020-10-22T16:19:48
OCNLI stands for Original Chinese Natural Language Inference. It is a corpus for Chinese Natural Language Inference, collected following closely the procedures of MNLI, but with enhanced strategies aiming for more challenging inference pairs. We want to emphasize we did not use hu...
JetRunner
https://github.com/huggingface/datasets/pull/742
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/742", "html_url": "https://github.com/huggingface/datasets/pull/742", "diff_url": "https://github.com/huggingface/datasets/pull/742.diff", "patch_url": "https://github.com/huggingface/datasets/pull/742.patch", "merged_at": "2020-10-22T16:19:47"...
true
723,924,275
741
Creating dataset consumes too much memory
closed
[]
2020-10-18T06:07:06
2022-02-15T17:03:10
2022-02-15T17:03:10
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue. Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400): ```python def _generate_examples(self, base_path, split): """ Yields examp...
AmitMY
https://github.com/huggingface/datasets/issues/741
null
false
723,047,958
740
Fix TREC urls
closed
[]
2020-10-16T09:11:28
2020-10-19T08:54:37
2020-10-19T08:54:36
The old TREC urls are now redirections. I updated the urls to the new ones, since we don't support redirections for downloads. Fix #737
lhoestq
https://github.com/huggingface/datasets/pull/740
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/740", "html_url": "https://github.com/huggingface/datasets/pull/740", "diff_url": "https://github.com/huggingface/datasets/pull/740.diff", "patch_url": "https://github.com/huggingface/datasets/pull/740.patch", "merged_at": "2020-10-19T08:54:35"...
true
723,044,066
739
Add wiki dpr multiset embeddings
closed
[]
2020-10-16T09:05:49
2020-11-26T14:02:50
2020-11-26T14:02:49
There are two DPR encoders, one trained on Natural Questions and one trained on a multiset/hybrid dataset. Previously only the embeddings from the encoder trained on NQ were available. I'm adding the ones from the encoder trained on the multiset/hybrid dataset. In the configuration you can now specify `embeddings_nam...
lhoestq
https://github.com/huggingface/datasets/pull/739
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/739", "html_url": "https://github.com/huggingface/datasets/pull/739", "diff_url": "https://github.com/huggingface/datasets/pull/739.diff", "patch_url": "https://github.com/huggingface/datasets/pull/739.patch", "merged_at": "2020-11-26T14:02:49"...
true
723,033,923
738
Replace seqeval code with original classification_report for simplicity
closed
[]
2020-10-16T08:51:45
2021-01-21T16:07:15
2020-10-19T10:31:12
Recently, the original seqeval has enabled us to get per type scores and overall scores as a dictionary. This PR replaces the current code with the original function(`classification_report`) to simplify it. Also, the original code has been updated to fix #352. - Related issue: https://github.com/chakki-works/seq...
Hironsan
https://github.com/huggingface/datasets/pull/738
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/738", "html_url": "https://github.com/huggingface/datasets/pull/738", "diff_url": "https://github.com/huggingface/datasets/pull/738.diff", "patch_url": "https://github.com/huggingface/datasets/pull/738.patch", "merged_at": "2020-10-19T10:31:11"...
true
722,463,923
737
Trec Dataset Connection Error
closed
[]
2020-10-15T15:57:53
2020-10-19T08:54:36
2020-10-19T08:54:36
**Datasets Version:** 1.1.2 **Python Version:** 3.6/3.7 **Code:** ```python from datasets import load_dataset load_dataset("trec") ``` **Expected behavior:** Download Trec dataset and load Dataset object **Current Behavior:** Get a connection error saying it couldn't reach http://cogcomp.org/Data/...
aychang95
https://github.com/huggingface/datasets/issues/737
null
false
722,348,191
736
Start community-provided dataset docs
closed
[]
2020-10-15T13:41:39
2020-10-23T13:15:28
2020-10-23T13:15:28
This is what I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. It was pretty easy because all I had to do was make properly formatted directories and change URLs. + In slack @thomwolf called it a `user-namespace` dataset, but the docs call it `community dataset`...
sshleifer
https://github.com/huggingface/datasets/pull/736
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/736", "html_url": "https://github.com/huggingface/datasets/pull/736", "diff_url": "https://github.com/huggingface/datasets/pull/736.diff", "patch_url": "https://github.com/huggingface/datasets/pull/736.patch", "merged_at": null }
true
722,225,270
735
Throw error when an unexpected key is used in data_files
closed
[]
2020-10-15T10:55:27
2020-10-30T13:23:52
2020-10-30T13:23:52
I have found that only "train", "validation" and "test" are valid keys in the `data_files` argument. When you use any other ones, those attached files are silently ignored - leading to unexpected behaviour for the users. So the following, unintuitively, returns only one key (namely `train`). ```python datasets =...
BramVanroy
https://github.com/huggingface/datasets/issues/735
null
false
721,767,848
734
Fix GLUE metric description
closed
[]
2020-10-14T20:44:14
2020-10-15T09:27:43
2020-10-15T09:27:42
Small typo: the description says translation instead of prediction.
sgugger
https://github.com/huggingface/datasets/pull/734
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/734", "html_url": "https://github.com/huggingface/datasets/pull/734", "diff_url": "https://github.com/huggingface/datasets/pull/734.diff", "patch_url": "https://github.com/huggingface/datasets/pull/734.patch", "merged_at": "2020-10-15T09:27:42"...
true
721,366,744
733
Update link to dataset viewer
closed
[]
2020-10-14T11:13:23
2020-10-14T14:07:31
2020-10-14T14:07:31
Change 404 error links in quick tour to working ones
negedng
https://github.com/huggingface/datasets/pull/733
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/733", "html_url": "https://github.com/huggingface/datasets/pull/733", "diff_url": "https://github.com/huggingface/datasets/pull/733.diff", "patch_url": "https://github.com/huggingface/datasets/pull/733.patch", "merged_at": "2020-10-14T14:07:31"...
true
721,359,448
732
dataset(wlasl): initial loading script
closed
[]
2020-10-14T11:01:42
2021-03-23T06:19:43
2021-03-23T06:19:43
takes like 9-10 hours to download all of the videos for the dataset, but it does finish :)
AmitMY
https://github.com/huggingface/datasets/pull/732
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/732", "html_url": "https://github.com/huggingface/datasets/pull/732", "diff_url": "https://github.com/huggingface/datasets/pull/732.diff", "patch_url": "https://github.com/huggingface/datasets/pull/732.patch", "merged_at": null }
true
721,142,985
731
dataset(aslg_pc12): initial loading script
closed
[]
2020-10-14T05:14:37
2020-10-28T15:27:06
2020-10-28T15:27:06
This contains the only currently public part of this corpus. The rest of the corpus has not yet been made public, but this sample is still being used by researchers.
AmitMY
https://github.com/huggingface/datasets/pull/731
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/731", "html_url": "https://github.com/huggingface/datasets/pull/731", "diff_url": "https://github.com/huggingface/datasets/pull/731.diff", "patch_url": "https://github.com/huggingface/datasets/pull/731.patch", "merged_at": "2020-10-28T15:27:06"...
true
721,073,812
730
Possible caching bug
closed
[]
2020-10-14T02:02:34
2022-11-22T01:45:54
2020-10-29T09:36:01
The following code with `test1.txt` containing just "🤗🤗🤗": ``` dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1") print(dataset[0]) dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8") print(dataset[0]) ``` produc...
ArneBinder
https://github.com/huggingface/datasets/issues/730
null
false
719,558,876
729
Better error message when one forgets to call `add_batch` before `compute`
closed
[]
2020-10-12T17:59:22
2020-10-29T15:18:24
2020-10-29T15:18:24
When using metrics, if for some reason a user forgets to call `add_batch` to a metric before `compute` (with no arguments), the error message is a bit cryptic and could probably be made clearer. ## Reproducer ```python import datasets import torch from datasets import Metric class GatherMetric(Metric): ...
sgugger
https://github.com/huggingface/datasets/issues/729
null
false
719,555,780
728
Passing `cache_dir` to a metric does not work
closed
[]
2020-10-12T17:55:14
2020-10-29T09:34:42
2020-10-29T09:34:42
When passing `cache_dir` to a custom metric, the folder is concatenated to itself at some point and this results in a FileNotFoundError: ## Reproducer ```python import datasets import torch from datasets import Metric class GatherMetric(Metric): def _info(self): return datasets.MetricInfo( ...
sgugger
https://github.com/huggingface/datasets/issues/728
null
false
719,386,366
727
Parallel downloads progress bar flickers
open
[]
2020-10-12T13:36:05
2020-10-12T13:36:05
null
When there are parallel downloads using the download manager, the tqdm progress bar flickers since all the progress bars are on the same line. To fix that we could simply specify `position=i`, for i = 0 to n-1 where n is the number of files to download, when instantiating the tqdm progress bar. Another way would be to have one "...
lhoestq
https://github.com/huggingface/datasets/issues/727
null
false
719,313,754
726
"Checksums didn't match for dataset source files" error while loading openwebtext dataset
closed
[]
2020-10-12T11:45:10
2022-02-17T17:53:54
2022-02-15T10:38:57
Hi, I have encountered this problem during loading the openwebtext dataset: ``` >>> dataset = load_dataset('openwebtext') Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op...
SparkJiao
https://github.com/huggingface/datasets/issues/726
null
false
718,985,641
725
pretty print dataset objects
closed
[]
2020-10-12T02:03:46
2020-10-23T16:24:35
2020-10-23T09:00:46
Currently, if I do: ``` from datasets import load_dataset load_dataset("wikihow", 'all', data_dir="/hf/pegasus-datasets/wikihow/") ``` I get: ``` DatasetDict({'train': Dataset(features: {'text': Value(dtype='string', id=None), 'headline': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None...
stas00
https://github.com/huggingface/datasets/pull/725
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/725", "html_url": "https://github.com/huggingface/datasets/pull/725", "diff_url": "https://github.com/huggingface/datasets/pull/725.diff", "patch_url": "https://github.com/huggingface/datasets/pull/725.patch", "merged_at": "2020-10-23T09:00:46"...
true