Dataset schema (per-column dtype and observed range):

| column | dtype | observed range |
|---|---|---|
| id | int64 | 599M – 3.48B |
| number | int64 | 1 – 7.8k |
| title | string | lengths 1 – 290 |
| state | string | 2 values |
| comments | list | lengths 0 – 30 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 – 2025-10-05 06:37:50 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 – 2025-10-05 10:32:43 |
| closed_at | timestamp[s], nullable | 2020-04-14 12:01:40 – 2025-10-01 13:56:03 |
| body | string, nullable | lengths 0 – 228k |
| user | string | lengths 3 – 26 |
| html_url | string | lengths 46 – 51 |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
627,235,893
219
force mwparserfromhell as third party
closed
[]
2020-05-29T12:33:17
2020-05-29T13:30:13
2020-05-29T13:30:12
This should fix your env because you had `mwparserfromhell` listed as a first-party package for `isort` @patrickvonplaten
lhoestq
https://github.com/huggingface/datasets/pull/219
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/219", "html_url": "https://github.com/huggingface/datasets/pull/219", "diff_url": "https://github.com/huggingface/datasets/pull/219.diff", "patch_url": "https://github.com/huggingface/datasets/pull/219.patch", "merged_at": "2020-05-29T13:30:12"...
true
627,173,407
218
Add Natural Questions and C4 scripts
closed
[]
2020-05-29T10:40:30
2020-05-29T12:31:01
2020-05-29T12:31:00
Scripts are ready! However, they are not yet processed nor directly available from GCP.
lhoestq
https://github.com/huggingface/datasets/pull/218
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/218", "html_url": "https://github.com/huggingface/datasets/pull/218", "diff_url": "https://github.com/huggingface/datasets/pull/218.diff", "patch_url": "https://github.com/huggingface/datasets/pull/218.patch", "merged_at": "2020-05-29T12:31:00"...
true
627,128,403
217
Multi-task dataset mixing
open
[]
2020-05-29T09:22:26
2025-09-24T08:59:38
null
It seems like many of the best performing models on the GLUE benchmark make some use of multitask learning (simultaneous training on multiple tasks). The [T5 paper](https://arxiv.org/pdf/1910.10683.pdf) highlights multiple ways of mixing the tasks together during finetuning: - **Examples-proportional mixing** - sam...
ghomasHudson
https://github.com/huggingface/datasets/issues/217
null
false
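Issue #217 above lists the T5 mixing strategies. As a concrete illustration of examples-proportional mixing, here is a minimal, self-contained sketch (the `mixed_examples` helper and the toy task data are hypothetical, not part of `nlp`): each task is sampled with probability proportional to its size.

```python
import random

def mixed_examples(datasets, n_samples, rng=random.Random(0)):
    """Yield (task, example) pairs, sampling tasks proportionally to their size."""
    names = list(datasets)
    weights = [len(datasets[name]) for name in names]  # examples-proportional
    for _ in range(n_samples):
        task = rng.choices(names, weights=weights, k=1)[0]
        yield task, datasets[task][rng.randrange(len(datasets[task]))]

# Toy usage: taskA is 10x larger, so it dominates the mixture.
for task, example in mixed_examples({'taskA': list(range(100)), 'taskB': list(range(10))}, 5):
    print(task, example)
```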
626,896,890
216
โ“ How to get ROUGE-2 with the ROUGE metric ?
closed
[]
2020-05-28T23:47:32
2020-06-01T00:04:35
2020-06-01T00:04:35
I'm trying to use the ROUGE metric, but I don't know how to get the ROUGE-2 score. --- I compute scores with: ```python import nlp rouge = nlp.load_metric('rouge') with open("pred.txt") as p, open("ref.txt") as g: for lp, lg in zip(p, g): rouge.add([lp], [lg]) score = rouge.compute() ``` ...
astariul
https://github.com/huggingface/datasets/issues/216
null
false
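Issue #216 asks how to extract the ROUGE-2 score. A minimal sketch, assuming the dict returned by `compute()` is keyed by variant name (`'rouge1'`, `'rouge2'`, `'rougeL'`), as in the underlying `rouge_score` package:

```python
import nlp

rouge = nlp.load_metric('rouge')
rouge.add(["the cat sat on the mat"], ["the cat was sitting on the mat"])
score = rouge.compute()

# Inspect the keys first; the variant names above are an assumption.
print(score.keys())
print(score.get('rouge2'))
```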
626,867,879
215
NonMatchingSplitsSizesError when loading blog_authorship_corpus
closed
[]
2020-05-28T22:55:19
2025-01-04T00:03:12
2022-02-10T13:05:45
Getting this error when i run `nlp.load_dataset('blog_authorship_corpus')`. ``` raise NonMatchingSplitsSizesError(str(bad_splits)) nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded...
cedricconol
https://github.com/huggingface/datasets/issues/215
null
false
626,641,549
214
[arrow_dataset.py] add new filter function
closed
[]
2020-05-28T16:21:40
2020-05-29T11:43:29
2020-05-29T11:32:20
The `.map()` function is super useful, but can IMO be a bit tedious when filtering certain examples. I think filtering out examples is also a very common operation people would like to perform on datasets. This PR is a proposal to add a `.filter()` function in the same spirit as the `.map()` function. Here is a ...
patrickvonplaten
https://github.com/huggingface/datasets/pull/214
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/214", "html_url": "https://github.com/huggingface/datasets/pull/214", "diff_url": "https://github.com/huggingface/datasets/pull/214.diff", "patch_url": "https://github.com/huggingface/datasets/pull/214.patch", "merged_at": "2020-05-29T11:32:20"...
true
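A hedged sketch of the `.filter()` usage PR #214 proposes (the exact merged signature may differ):

```python
import nlp

dataset = nlp.load_dataset('squad', split='validation[:1%]')

# Keep examples whose question is short; the per-example predicate mirrors
# the function-per-example style of .map().
short = dataset.filter(lambda example: len(example['question']) < 60)
print(len(dataset), len(short))
```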
626,587,995
213
better message if missing beam options
closed
[]
2020-05-28T15:06:57
2020-05-29T09:51:17
2020-05-29T09:51:16
WDYT @yjernite ? For example: ```python dataset = nlp.load_dataset('wikipedia', '20200501.aa') ``` Raises: ``` MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to ru...
lhoestq
https://github.com/huggingface/datasets/pull/213
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/213", "html_url": "https://github.com/huggingface/datasets/pull/213", "diff_url": "https://github.com/huggingface/datasets/pull/213.diff", "patch_url": "https://github.com/huggingface/datasets/pull/213.patch", "merged_at": "2020-05-29T09:51:16"...
true
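The `MissingBeamOptions` message in PR #213 points at the fix: provide a runner. A minimal sketch, assuming `load_dataset` forwards a `beam_runner` argument as the error message suggests:

```python
import nlp

# The tiny '20200501.aa' Wikipedia config is small enough for the local
# DirectRunner; full-size configs need a distributed runner such as Dataflow.
dataset = nlp.load_dataset('wikipedia', '20200501.aa', beam_runner='DirectRunner')
```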
626,580,198
212
have 'add' and 'add_batch' for metrics
closed
[]
2020-05-28T14:56:47
2020-05-29T10:41:05
2020-05-29T10:41:04
This should fix #116 Previously the `.add` method of metrics expected a batch of examples. Now `.add` expects one prediction/reference and `.add_batch` expects a batch. I think it is more coherent with the way the ArrowWriter works.
lhoestq
https://github.com/huggingface/datasets/pull/212
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/212", "html_url": "https://github.com/huggingface/datasets/pull/212", "diff_url": "https://github.com/huggingface/datasets/pull/212.diff", "patch_url": "https://github.com/huggingface/datasets/pull/212.patch", "merged_at": "2020-05-29T10:41:04"...
true
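A sketch of the split introduced by PR #212 (the choice of metric and the keyword names are assumptions; only the one-vs-batch distinction comes from the PR):

```python
import nlp

metric = nlp.load_metric('glue', 'mrpc')

metric.add(prediction=1, reference=1)                          # one pair at a time
metric.add_batch(predictions=[0, 1, 1], references=[0, 1, 0])  # a whole batch
print(metric.compute())
```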
626,565,994
211
[Arrow writer, Trivia_qa] Could not convert TagMe with type str: converting to null type
closed
[]
2020-05-28T14:38:14
2020-07-23T10:15:16
2020-07-23T10:15:16
Running the following code ``` import nlp ds = nlp.load_dataset("trivia_qa", "rc", split="validation[:1%]") # this might take 2.3 min to download but it's cached afterwards... ds.map(lambda x: x, load_from_cache_file=False) ``` triggers an `ArrowInvalid: Could not convert TagMe with type str: converting to n...
patrickvonplaten
https://github.com/huggingface/datasets/issues/211
null
false
626,504,243
210
fix xnli metric kwargs description
closed
[]
2020-05-28T13:21:44
2020-05-28T13:22:11
2020-05-28T13:22:10
The text was wrong as noticed in #202
lhoestq
https://github.com/huggingface/datasets/pull/210
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/210", "html_url": "https://github.com/huggingface/datasets/pull/210", "diff_url": "https://github.com/huggingface/datasets/pull/210.diff", "patch_url": "https://github.com/huggingface/datasets/pull/210.patch", "merged_at": "2020-05-28T13:22:10"...
true
626,405,849
209
Add a Google Drive exception for small files
closed
[]
2020-05-28T10:40:17
2020-05-28T15:15:04
2020-05-28T15:15:04
I tried to use the ``nlp`` library to load personal datasets. I mainly copy-pasted the code for the ``multi-news`` dataset because my files are stored on Google Drive. One of my datasets is small (< 25 MB), so it can be verified by Drive without asking the user for authorization. This makes the download start directly...
airKlizz
https://github.com/huggingface/datasets/pull/209
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/209", "html_url": "https://github.com/huggingface/datasets/pull/209", "diff_url": "https://github.com/huggingface/datasets/pull/209.diff", "patch_url": "https://github.com/huggingface/datasets/pull/209.patch", "merged_at": "2020-05-28T15:15:04"...
true
626,398,519
208
[Dummy data] insert config name instead of config
closed
[]
2020-05-28T10:28:19
2020-05-28T12:48:01
2020-05-28T12:48:00
Thanks @yjernite for letting me know. In the dummy data command, the config name should be passed to the dataset builder and not the config itself. Also, @lhoestq fixed a small import bug introduced by the beam command, I think.
patrickvonplaten
https://github.com/huggingface/datasets/pull/208
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/208", "html_url": "https://github.com/huggingface/datasets/pull/208", "diff_url": "https://github.com/huggingface/datasets/pull/208.diff", "patch_url": "https://github.com/huggingface/datasets/pull/208.patch", "merged_at": "2020-05-28T12:48:00"...
true
625,932,200
207
Remove test set from NLP viewer
closed
[]
2020-05-27T18:32:07
2022-02-10T13:17:45
2022-02-10T13:17:45
While the new [NLP viewer](https://huggingface.co/nlp/viewer/) is a great tool, I think it would be best to outright remove the option of looking at the test sets. At the very least, a warning should be displayed to users before showing the test set. Newcomers to the field might not be aware of best practices, and smal...
chrisdonahue
https://github.com/huggingface/datasets/issues/207
null
false
625,842,989
206
[Question] Combine 2 datasets which have the same columns
closed
[]
2020-05-27T16:25:52
2020-06-10T09:11:14
2020-06-10T09:11:14
Hi, I am using ``nlp`` to load personal datasets. I created summarization datasets in multiple languages based on wikinews. I have one dataset for English and one for German (French is getting ready as well). I want to keep these datasets independent because they need different pre-processing (add different task-...
airKlizz
https://github.com/huggingface/datasets/issues/206
null
false
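Issue #206 asks how to combine two datasets that share the same columns. A sketch assuming a `concatenate_datasets` helper that stacks Arrow tables with identical schemas (the CSV file names are placeholders):

```python
import nlp

en = nlp.load_dataset('csv', data_files='wikinews_en.csv', split='train')
de = nlp.load_dataset('csv', data_files='wikinews_de.csv', split='train')

# Assumption: concatenate_datasets requires both sides to share one schema.
combined = nlp.concatenate_datasets([en, de])
assert len(combined) == len(en) + len(de)
```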
625,839,335
205
Better arrow dataset iter
closed
[]
2020-05-27T16:20:21
2020-05-27T16:39:58
2020-05-27T16:39:56
I tried to play around with `tf.data.Dataset.from_generator` and I found out that the `__iter__` that we have for `nlp.arrow_dataset.Dataset` ignores the format that has been set (torch or tensorflow). With these changes I should be able to come up with a `tf.data.Dataset` that uses lazy loading, as asked in #193.
lhoestq
https://github.com/huggingface/datasets/pull/205
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/205", "html_url": "https://github.com/huggingface/datasets/pull/205", "diff_url": "https://github.com/huggingface/datasets/pull/205.diff", "patch_url": "https://github.com/huggingface/datasets/pull/205.patch", "merged_at": "2020-05-27T16:39:56"...
true
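PR #205 makes `__iter__` respect the configured format, so a lazily loading `tf.data.Dataset` becomes possible. A minimal sketch using `from_generator` over a raw text column (the column choice is illustrative):

```python
import nlp
import tensorflow as tf

dataset = nlp.load_dataset('squad', split='validation[:1%]')

# from_generator pulls examples lazily from the dataset's __iter__ instead of
# materializing everything up front as from_tensor_slices would.
tf_dataset = tf.data.Dataset.from_generator(
    lambda: ({'question': example['question']} for example in dataset),
    output_types={'question': tf.string},
)
for batch in tf_dataset.batch(2).take(1):
    print(batch)
```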
625,655,849
204
Add Dataflow support + Wikipedia + Wiki40b
closed
[]
2020-05-27T12:32:49
2020-05-28T08:10:35
2020-05-28T08:10:34
# Add Dataflow support + Wikipedia + Wiki40b ## Support dataset processing with Apache Beam Some datasets are too big to be processed on a single machine, for example: wikipedia, wiki40b, etc. Apache Beam makes it possible to process datasets on many execution engines like Dataflow, Spark, Flink, etc. To process such da...
lhoestq
https://github.com/huggingface/datasets/pull/204
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/204", "html_url": "https://github.com/huggingface/datasets/pull/204", "diff_url": "https://github.com/huggingface/datasets/pull/204.diff", "patch_url": "https://github.com/huggingface/datasets/pull/204.patch", "merged_at": "2020-05-28T08:10:34"...
true
625,515,488
203
Raise an error if no config name for datasets like glue
closed
[]
2020-05-27T09:03:58
2020-05-27T16:40:39
2020-05-27T16:40:38
Some datasets like glue (see #130) and scientific_papers (see #197) have many configs. For example, for glue there are cola, sst2, mrpc, etc. Currently, if a user does `load_dataset('glue')`, then CoLA is loaded by default, which can be confusing. Instead, we should raise an error to let the user know that he has to p...
lhoestq
https://github.com/huggingface/datasets/pull/203
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/203", "html_url": "https://github.com/huggingface/datasets/pull/203", "diff_url": "https://github.com/huggingface/datasets/pull/203.diff", "patch_url": "https://github.com/huggingface/datasets/pull/203.patch", "merged_at": "2020-05-27T16:40:38"...
true
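The intended user-facing behavior of PR #203, as a sketch (the exact exception type is an assumption):

```python
import nlp

try:
    nlp.load_dataset('glue')        # no config name: should now raise
except ValueError as err:
    print(err)                      # expected to list the available configs

dataset = nlp.load_dataset('glue', 'sst2')  # explicit config name works
```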
625,493,983
202
Mistaken `_KWARGS_DESCRIPTION` for XNLI metric
closed
[]
2020-05-27T08:34:42
2020-05-28T13:22:36
2020-05-28T13:22:36
Hi! The [`_KWARGS_DESCRIPTION`](https://github.com/huggingface/nlp/blob/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56/metrics/xnli/xnli.py#L45) for the XNLI metric uses `Args` and `Returns` text from [BLEU](https://github.com/huggingface/nlp/blob/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56/metrics/bleu/bleu.py#L58) metric: ...
phiyodr
https://github.com/huggingface/datasets/issues/202
null
false
625,235,430
201
Fix typo in README
closed
[]
2020-05-26T22:18:21
2020-05-26T23:40:31
2020-05-26T23:00:56
LysandreJik
https://github.com/huggingface/datasets/pull/201
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/201", "html_url": "https://github.com/huggingface/datasets/pull/201", "diff_url": "https://github.com/huggingface/datasets/pull/201.diff", "patch_url": "https://github.com/huggingface/datasets/pull/201.patch", "merged_at": "2020-05-26T23:00:56"...
true
625,226,638
200
[ArrowWriter] Set schema at first write example
closed
[]
2020-05-26T21:59:48
2020-05-27T09:07:54
2020-05-27T09:07:53
Right now if the schema was not specified when instantiating `ArrowWriter`, then it could be set with the first `write_table` for example (it calls `self._build_writer()` to do so). I noticed that it was not done if the first example is added via `.write`, so I added it for coherence.
lhoestq
https://github.com/huggingface/datasets/pull/200
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/200", "html_url": "https://github.com/huggingface/datasets/pull/200", "diff_url": "https://github.com/huggingface/datasets/pull/200.diff", "patch_url": "https://github.com/huggingface/datasets/pull/200.patch", "merged_at": "2020-05-27T09:07:53"...
true
625,217,440
199
Fix GermEval 2014 dataset infos
closed
[]
2020-05-26T21:41:44
2020-05-26T21:50:24
2020-05-26T21:50:24
Hi, this PR just removes the `dataset_info.json` file and adds a newly generated `dataset_infos.json` file.
stefan-it
https://github.com/huggingface/datasets/pull/199
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/199", "html_url": "https://github.com/huggingface/datasets/pull/199", "diff_url": "https://github.com/huggingface/datasets/pull/199.diff", "patch_url": "https://github.com/huggingface/datasets/pull/199.patch", "merged_at": "2020-05-26T21:50:24"...
true
625,200,627
198
Index outside of table length
closed
[]
2020-05-26T21:09:40
2020-05-26T22:43:49
2020-05-26T22:43:49
The offset input box warns of numbers larger than a limit (like 2000) but then the errors start at a smaller value than that limit (like 1955). > ValueError: Index (2000) outside of table length (2000). > Traceback: > File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _ru...
casajarm
https://github.com/huggingface/datasets/issues/198
null
false
624,966,904
197
Scientific Papers only downloading Pubmed
closed
[]
2020-05-26T15:18:47
2020-05-28T08:19:28
2020-05-28T08:19:28
Hi! I have been playing around with this module, and I am a bit confused about the `scientific_papers` dataset. I thought that it would download two separate datasets, arxiv and pubmed. But when I run the following: ``` dataset = nlp.load_dataset('scientific_papers', data_dir='.', cache_dir='.') Downloading: 10...
antmarakis
https://github.com/huggingface/datasets/issues/197
null
false
624,901,266
196
Check invalid config name
closed
[]
2020-05-26T13:52:51
2020-05-26T21:04:56
2020-05-26T21:04:55
As said in #194, we should raise an error if the config name has bad characters. Bad characters are those that are not allowed in directory names on Windows.
lhoestq
https://github.com/huggingface/datasets/pull/196
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/196", "html_url": "https://github.com/huggingface/datasets/pull/196", "diff_url": "https://github.com/huggingface/datasets/pull/196.diff", "patch_url": "https://github.com/huggingface/datasets/pull/196.patch", "merged_at": "2020-05-26T21:04:55"...
true
624,858,686
195
[Dummy data command] add new case to command
closed
[]
2020-05-26T12:50:47
2020-05-26T14:38:28
2020-05-26T14:38:27
Qanta: #194 introduces a case that was not noticed before. This change in code helps community users to have an easier time creating the dummy data.
patrickvonplaten
https://github.com/huggingface/datasets/pull/195
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/195", "html_url": "https://github.com/huggingface/datasets/pull/195", "diff_url": "https://github.com/huggingface/datasets/pull/195.diff", "patch_url": "https://github.com/huggingface/datasets/pull/195.patch", "merged_at": "2020-05-26T14:38:27"...
true
624,854,897
194
Add Dataset: Qanta
closed
[]
2020-05-26T12:44:35
2020-05-26T16:58:17
2020-05-26T13:16:20
Fixes dummy data for #169 @EntilZha
patrickvonplaten
https://github.com/huggingface/datasets/pull/194
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/194", "html_url": "https://github.com/huggingface/datasets/pull/194", "diff_url": "https://github.com/huggingface/datasets/pull/194.diff", "patch_url": "https://github.com/huggingface/datasets/pull/194.patch", "merged_at": "2020-05-26T13:16:20"...
true
624,655,558
193
[Tensorflow] Use something else than `from_tensor_slices()`
closed
[]
2020-05-26T07:19:14
2020-10-27T15:28:11
2020-10-27T15:28:11
In the example notebook, the TF Dataset is built using `from_tensor_slices()` : ```python columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions'] train_tf_dataset.set_format(type='tensorflow', columns=columns) features = {x: train_tf_dataset[x] for x in columns[:3]} label...
astariul
https://github.com/huggingface/datasets/issues/193
null
false
624,397,592
192
[Question] Create Apache Arrow dataset from raw text file
closed
[]
2020-05-25T16:42:47
2021-12-18T01:45:34
2020-10-27T15:20:22
Hi guys, I have gathered and preprocessed about 2GB of COVID papers from the CORD dataset on Kaggle. I have seen you have a text dataset such as "Crime and Punishment" in Apache Arrow format. Do you have any script to do it from a raw txt file (preprocessed as for BERT-like models) or any guide? Is it worth sending it to you and add i...
mrm8488
https://github.com/huggingface/datasets/issues/192
null
false
624,394,936
191
[Squad es] add dataset_infos
closed
[]
2020-05-25T16:35:52
2020-05-25T16:39:59
2020-05-25T16:39:58
@mariamabarham - was still about to upload this. Should have waited with my comment a bit more :D
patrickvonplaten
https://github.com/huggingface/datasets/pull/191
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/191", "html_url": "https://github.com/huggingface/datasets/pull/191", "diff_url": "https://github.com/huggingface/datasets/pull/191.diff", "patch_url": "https://github.com/huggingface/datasets/pull/191.patch", "merged_at": "2020-05-25T16:39:58"...
true
624,124,600
190
add squad Spanish v1 and v2
closed
[]
2020-05-25T08:08:40
2020-05-25T16:28:46
2020-05-25T16:28:45
This PR adds the Spanish SQuAD versions 1 and 2 datasets. Fixes #164
mariamabarham
https://github.com/huggingface/datasets/pull/190
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/190", "html_url": "https://github.com/huggingface/datasets/pull/190", "diff_url": "https://github.com/huggingface/datasets/pull/190.diff", "patch_url": "https://github.com/huggingface/datasets/pull/190.patch", "merged_at": "2020-05-25T16:28:45"...
true
624,048,881
189
[Question] BERT-style multiple choice formatting
closed
[]
2020-05-25T05:11:05
2020-05-25T18:38:28
2020-05-25T18:38:28
Hello, I am wondering what the equivalent formatting of a dataset should be to allow for multiple-choice answering prediction, BERT-style. Previously, this was done by passing a list of `InputFeatures` to the dataloader instead of a list of `InputFeature`, where `InputFeatures` contained lists of length equal to the nu...
sarahwie
https://github.com/huggingface/datasets/issues/189
null
false
623,890,430
188
When will the remaining math_dataset modules be added as dataset objects
closed
[]
2020-05-24T15:46:52
2020-05-24T18:53:48
2020-05-24T18:53:48
Currently only the algebra_linear_1d module is supported. Is there a timeline for making the other modules supported? If no timeline is established, how can I help?
tylerroost
https://github.com/huggingface/datasets/issues/188
null
false
623,627,800
187
[Question] How to load wikipedia ? Beam runner ?
closed
[]
2020-05-23T10:18:52
2020-05-25T00:12:02
2020-05-25T00:12:02
When `nlp.load_dataset('wikipedia')`, I got * `WARNING:nlp.builder:Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided. Please pass a nlp.DownloadConfig(beam_runner=...) object to the builder.download_and_prepare(download_config=...) method. Default values will be ...
richarddwang
https://github.com/huggingface/datasets/issues/187
null
false
623,595,180
186
Weird-ish: Not creating unique caches for different phases
closed
[]
2020-05-23T06:40:58
2020-05-23T20:22:18
2020-05-23T20:22:17
Sample code: ```python import nlp dataset = nlp.load_dataset('boolq') def func1(x): return x def func2(x): return None train_output = dataset["train"].map(func1) valid_output = dataset["validation"].map(func1) print() print(len(train_output), len(valid_output)) # Output: 9427 9427 ``` Th...
zphang
https://github.com/huggingface/datasets/issues/186
null
false
623,172,484
185
[Commands] In-detail instructions to create dummy data folder
closed
[]
2020-05-22T12:26:25
2020-05-22T14:06:35
2020-05-22T14:06:34
### Dummy data command This PR adds a new command `python nlp-cli dummy_data <path_to_dataset_folder>` that gives in-detail instructions on how to add the dummy data files. It would be great if you can try it out by moving the current dummy_data folder of any dataset in `./datasets` with `mv datasets/<dataset_s...
patrickvonplaten
https://github.com/huggingface/datasets/pull/185
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/185", "html_url": "https://github.com/huggingface/datasets/pull/185", "diff_url": "https://github.com/huggingface/datasets/pull/185.diff", "patch_url": "https://github.com/huggingface/datasets/pull/185.patch", "merged_at": "2020-05-22T14:06:34"...
true
623,120,929
184
Use IndexError instead of ValueError when index out of range
closed
[]
2020-05-22T10:43:42
2020-05-28T08:31:18
2020-05-28T08:31:18
**`default __iter__ needs IndexError`**. When I wanted to create a wrapper of an arrow dataset to adapt it to fastai, I didn't know how to initialize it, so I used object composition instead of inheritance. I wrote something like this. ``` class HF_dataset(): def __init__(self, arrow_dataset): self.dset = arrow_datas...
richarddwang
https://github.com/huggingface/datasets/pull/184
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/184", "html_url": "https://github.com/huggingface/datasets/pull/184", "diff_url": "https://github.com/huggingface/datasets/pull/184.diff", "patch_url": "https://github.com/huggingface/datasets/pull/184.patch", "merged_at": "2020-05-28T08:31:18"...
true
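The point of PR #184 is that Python's legacy iteration protocol over `__getitem__` terminates on `IndexError`, not `ValueError`. A self-contained illustration in the spirit of the wrapper above (pure Python, no `nlp` required):

```python
class Wrapper:
    """Object-composition wrapper like the HF_dataset snippet above."""
    def __init__(self, items):
        self._items = items

    def __getitem__(self, i):
        if i >= len(self._items):
            # Raising ValueError here would propagate out of a for-loop as an
            # error; IndexError is what ends the legacy iteration cleanly.
            raise IndexError
        return self._items[i]

# With no __iter__ defined, Python calls __getitem__ with 0, 1, 2, ...
# and stops at the first IndexError.
for x in Wrapper(["a", "b", "c"]):
    print(x)
```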
623,054,270
183
[Bug] labels of glue/ax are all -1
closed
[]
2020-05-22T08:43:36
2020-05-22T22:14:05
2020-05-22T22:14:05
``` ax = nlp.load_dataset('glue', 'ax') for i in range(30): print(ax['test'][i]['label'], end=', ') ``` ``` -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, ```
richarddwang
https://github.com/huggingface/datasets/issues/183
null
false
622,646,770
182
Update newsroom.py
closed
[]
2020-05-21T17:07:43
2020-05-22T16:38:23
2020-05-22T16:38:23
Updated the URL for Newsroom download so it's more robust to future changes.
yoavartzi
https://github.com/huggingface/datasets/pull/182
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/182", "html_url": "https://github.com/huggingface/datasets/pull/182", "diff_url": "https://github.com/huggingface/datasets/pull/182.diff", "patch_url": "https://github.com/huggingface/datasets/pull/182.patch", "merged_at": "2020-05-22T16:38:23"...
true
622,634,420
181
Cannot upload my own dataset
closed
[]
2020-05-21T16:45:52
2020-06-18T22:14:42
2020-06-18T22:14:42
I looked into `nlp-cli` and `user.py` to learn how to upload my own data. It is supposed to work like this - Register to get a username and password at huggingface.co - `nlp-cli login` and type the username and password - I have a single file to upload at `./ttc/ttc_freq_extra.csv` - `nlp-cli upload ttc/ttc_freq_extra.csv` ...
korakot
https://github.com/huggingface/datasets/issues/181
null
false
622,556,861
180
Add hall of fame
closed
[]
2020-05-21T14:53:48
2020-05-22T16:35:16
2020-05-22T16:35:14
powered by https://github.com/sourcerer-io/hall-of-fame
clmnt
https://github.com/huggingface/datasets/pull/180
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/180", "html_url": "https://github.com/huggingface/datasets/pull/180", "diff_url": "https://github.com/huggingface/datasets/pull/180.diff", "patch_url": "https://github.com/huggingface/datasets/pull/180.patch", "merged_at": "2020-05-22T16:35:14"...
true
622,525,410
179
[Feature request] separate split name and split instructions
closed
[]
2020-05-21T14:10:51
2020-05-22T13:31:08
2020-05-22T13:31:07
Currently, the name of an nlp.NamedSplit is parsed in arrow_reader.py and used as the instruction. This makes it impossible to have several training sets, which can occur when: - A dataset corresponds to a collection of sub-datasets - A dataset was built in stages, adding new examples at each stage Would it be ...
yjernite
https://github.com/huggingface/datasets/issues/179
null
false
621,979,849
178
[Manual data] improve error message for manual data in general
closed
[]
2020-05-20T18:10:45
2020-05-20T18:18:52
2020-05-20T18:18:50
`nlp.load("xsum")` now leads to the following error message: ![Screenshot from 2020-05-20 20-05-28](https://user-images.githubusercontent.com/23423619/82481825-3587ea00-9ad6-11ea-9ca2-5794252c6ac7.png) I guess the manual download instructions for `xsum` can also be improved.
patrickvonplaten
https://github.com/huggingface/datasets/pull/178
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/178", "html_url": "https://github.com/huggingface/datasets/pull/178", "diff_url": "https://github.com/huggingface/datasets/pull/178.diff", "patch_url": "https://github.com/huggingface/datasets/pull/178.patch", "merged_at": "2020-05-20T18:18:50"...
true
621,975,368
177
Xsum manual download instruction
closed
[]
2020-05-20T18:02:41
2020-05-20T18:16:50
2020-05-20T18:16:49
mariamabarham
https://github.com/huggingface/datasets/pull/177
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/177", "html_url": "https://github.com/huggingface/datasets/pull/177", "diff_url": "https://github.com/huggingface/datasets/pull/177.diff", "patch_url": "https://github.com/huggingface/datasets/pull/177.patch", "merged_at": "2020-05-20T18:16:49"...
true
621,934,638
176
[Tests] Refactor MockDownloadManager
closed
[]
2020-05-20T17:07:36
2020-05-20T18:17:19
2020-05-20T18:17:18
Clean mock download manager class. The print function was not of much help I think. We should think about adding a command that creates the dummy folder structure for the user.
patrickvonplaten
https://github.com/huggingface/datasets/pull/176
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/176", "html_url": "https://github.com/huggingface/datasets/pull/176", "diff_url": "https://github.com/huggingface/datasets/pull/176.diff", "patch_url": "https://github.com/huggingface/datasets/pull/176.patch", "merged_at": "2020-05-20T18:17:18"...
true
621,929,428
175
[Manual data dir] Error message: nlp.load_dataset('xsum') -> TypeError
closed
[]
2020-05-20T17:00:32
2020-05-20T18:18:50
2020-05-20T18:18:50
v 0.1.0 from pip ```python import nlp xsum = nlp.load_dataset('xsum') ``` Issue is `dl_manager.manual_dir`is `None` ```python --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-42-8a32f06...
sshleifer
https://github.com/huggingface/datasets/issues/175
null
false
621,928,403
174
nlp.load_dataset('xsum') -> TypeError
closed
[]
2020-05-20T16:59:09
2020-05-20T17:43:46
2020-05-20T17:43:46
sshleifer
https://github.com/huggingface/datasets/issues/174
null
false
621,764,932
173
Rm extracted test dirs
closed
[]
2020-05-20T13:30:48
2020-05-22T16:34:36
2020-05-22T16:34:35
All the dummy data used for tests were duplicated. For each dataset, we had one zip file but also its extracted directory. I removed all these directories. Furthermore, instead of extracting next to the dummy_data.zip file, we extract in the temp `cached_dir` used for tests, so that all the extracted directories get r...
lhoestq
https://github.com/huggingface/datasets/pull/173
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/173", "html_url": "https://github.com/huggingface/datasets/pull/173", "diff_url": "https://github.com/huggingface/datasets/pull/173.diff", "patch_url": "https://github.com/huggingface/datasets/pull/173.patch", "merged_at": "2020-05-22T16:34:35"...
true
621,377,386
172
Clone not working on Windows environment
closed
[]
2020-05-20T00:45:14
2020-05-23T12:49:13
2020-05-23T11:27:52
Cloning in a Windows environment is not working because of the use of the special character '?' in a folder name. Please consider changing the folder name. Reference to folder - nlp/datasets/cnn_dailymail/dummy/3.0.0/3.0.0/dummy_data-zip-extracted/dummy_data/uc?export=download&id=0BwmD_VLjROrfM1BxdkxVaTY2bWs/dailymail/s...
codehunk628
https://github.com/huggingface/datasets/issues/172
null
false
621,199,128
171
fix squad metric format
closed
[]
2020-05-19T18:37:36
2020-05-22T13:36:50
2020-05-22T13:36:48
The format of the squad metric was wrong. This should fix #143 I tested with ```python3 predictions = [ {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'} ] references = [ {'answers': [{'text': 'Denver Broncos'}], 'id': '56be4db0acb8001400a502ec'} ] ```
lhoestq
https://github.com/huggingface/datasets/pull/171
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/171", "html_url": "https://github.com/huggingface/datasets/pull/171", "diff_url": "https://github.com/huggingface/datasets/pull/171.diff", "patch_url": "https://github.com/huggingface/datasets/pull/171.patch", "merged_at": "2020-05-22T13:36:48"...
true
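Wiring the fixed format from PR #171 into a metric call, as a sketch (the keyword names are assumptions; only the dict format comes from the PR):

```python
import nlp

squad_metric = nlp.load_metric('squad')

predictions = [{'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'}]
references = [{'answers': [{'text': 'Denver Broncos'}], 'id': '56be4db0acb8001400a502ec'}]

score = squad_metric.compute(predictions=predictions, references=references)
print(score)  # expected to report exact-match / F1 style numbers
```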
621,119,747
170
Rename anli dataset
closed
[]
2020-05-19T16:26:57
2020-05-20T12:23:09
2020-05-20T12:23:08
What we have now as the `anli` dataset is actually the αNLI dataset from the ART challenge dataset. This name is confusing because `anli` is also the name of adversarial NLI (see [https://github.com/facebookresearch/anli](https://github.com/facebookresearch/anli)). I renamed the current `anli` dataset to `art`.
lhoestq
https://github.com/huggingface/datasets/pull/170
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/170", "html_url": "https://github.com/huggingface/datasets/pull/170", "diff_url": "https://github.com/huggingface/datasets/pull/170.diff", "patch_url": "https://github.com/huggingface/datasets/pull/170.patch", "merged_at": "2020-05-20T12:23:07"...
true
621,099,682
169
Adding Qanta (Quizbowl) Dataset
closed
[]
2020-05-19T16:03:01
2020-05-26T12:52:31
2020-05-26T12:52:31
This PR adds the qanta question answering datasets from [Quizbowl: The Case for Incremental Question Answering](https://arxiv.org/abs/1904.04792) and [Trick Me If You Can: Human-in-the-loop Generation of Adversarial Question Answering Examples](https://www.aclweb.org/anthology/Q19-1029/) (adversarial fold) This part...
EntilZha
https://github.com/huggingface/datasets/pull/169
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/169", "html_url": "https://github.com/huggingface/datasets/pull/169", "diff_url": "https://github.com/huggingface/datasets/pull/169.diff", "patch_url": "https://github.com/huggingface/datasets/pull/169.patch", "merged_at": null }
true
620,959,819
168
Loading 'wikitext' dataset fails
closed
[]
2020-05-19T13:04:29
2020-05-26T21:46:52
2020-05-26T21:46:52
Loading the 'wikitext' dataset fails with an AttributeError. Code to reproduce (from the example notebook): import nlp wikitext_dataset = nlp.load_dataset('wikitext') Error: --------------------------------------------------------------------------- AttributeError Traceback (most rece...
itay1itzhak
https://github.com/huggingface/datasets/issues/168
null
false
620,908,786
167
[Tests] refactor tests
closed
[]
2020-05-19T11:43:32
2020-05-19T16:17:12
2020-05-19T16:17:10
This PR separates AWS and Local tests to remove these ugly statements in the script: ```python if "/" not in dataset_name: logging.info("Skip {} because it is a canonical dataset") return ``` To run a `aws` test, one should now run the following command: ```python pytest -s...
patrickvonplaten
https://github.com/huggingface/datasets/pull/167
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/167", "html_url": "https://github.com/huggingface/datasets/pull/167", "diff_url": "https://github.com/huggingface/datasets/pull/167.diff", "patch_url": "https://github.com/huggingface/datasets/pull/167.patch", "merged_at": "2020-05-19T16:17:10"...
true
620,850,218
166
Add a method to shuffle a dataset
closed
[]
2020-05-19T10:08:46
2020-06-23T15:07:33
2020-06-23T15:07:32
Could maybe be a `dataset.shuffle(generator=None, seed=None)` signature method. Also, we could maybe have a clear indication of which methods modify in-place and which methods return/cache a modified dataset. I kinda like the torch convention of having an underscore suffix for all the methods which modify a dataset in-pl...
thomwolf
https://github.com/huggingface/datasets/issues/166
null
false
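A sketch of the API proposed in issue #166 (not implemented at the time of the issue):

```python
import nlp

dataset = nlp.load_dataset('squad', split='validation[:1%]')

# Proposed: returns/caches a shuffled dataset rather than modifying in place.
shuffled = dataset.shuffle(seed=42)
# Under the torch-style convention floated in the issue, an in-place variant
# would carry an underscore suffix, e.g. dataset.shuffle_(seed=42).
```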
620,758,221
165
ANLI
closed
[]
2020-05-19T07:50:57
2020-05-20T12:23:07
2020-05-20T12:23:07
Can I recommend the following: For ANLI, use https://github.com/facebookresearch/anli. As that paper says, "Our dataset is not to be confused with abductive NLI (Bhagavatula et al., 2019), which calls itself αNLI, or ART.". Indeed, the paper cited under what is currently called anli says in the abstract "We int...
douwekiela
https://github.com/huggingface/datasets/issues/165
null
false
620,540,250
164
Add Spanish POS and NER Datasets
closed
[]
2020-05-18T22:18:21
2020-05-25T16:28:45
2020-05-25T16:28:45
Hi guys, in order to cover multilingual support, a little step could be adding standard datasets used for Spanish NER and POS tasks. I can provide them in raw and preprocessed formats.
mrm8488
https://github.com/huggingface/datasets/issues/164
null
false
620,534,307
163
[Feature request] Add cos-e v1.0
closed
[]
2020-05-18T22:05:26
2020-06-16T23:15:25
2020-06-16T18:52:06
I noticed the second release of cos-e (v1.11) is included in this repo. I wanted to request inclusion of v1.0, since this is the version on which results are reported on in [the paper](https://www.aclweb.org/anthology/P19-1487/), and v1.11 has noted [annotation](https://github.com/salesforce/cos-e/issues/2) [issues](ht...
sarahwie
https://github.com/huggingface/datasets/issues/163
null
false
620,513,554
162
fix prev files hash in map
closed
[]
2020-05-18T21:20:51
2020-05-18T21:36:21
2020-05-18T21:36:20
Fix the `.map` issue in #160. This makes sure it takes the previous files when computing the hash.
lhoestq
https://github.com/huggingface/datasets/pull/162
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/162", "html_url": "https://github.com/huggingface/datasets/pull/162", "diff_url": "https://github.com/huggingface/datasets/pull/162.diff", "patch_url": "https://github.com/huggingface/datasets/pull/162.patch", "merged_at": "2020-05-18T21:36:20"...
true
620,487,535
161
Discussion on version identifier & MockDataLoaderManager for test data
open
[]
2020-05-18T20:31:30
2020-05-24T18:10:03
null
Hi, I'm working on adding a dataset and ran into an error due to `download` not being defined on `MockDataLoaderManager`, but being defined in `nlp/utils/download_manager.py`. The readme step running this: `RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_localmydatasetname` triggers ...
EntilZha
https://github.com/huggingface/datasets/issues/161
null
false
620,448,236
160
caching in map causes same result to be returned for train, validation and test
closed
[]
2020-05-18T19:22:03
2020-05-18T21:36:20
2020-05-18T21:36:20
hello, I am working on a program that uses the `nlp` library with the `SST2` dataset. The rough outline of the program is: ``` import nlp as nlp_datasets ... parser.add_argument('--dataset', help='HuggingFace Datasets id', default=['glue', 'sst2'], nargs='+') ... dataset = nlp_datasets.load_dataset(*args....
dpressel
https://github.com/huggingface/datasets/issues/160
null
false
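A sketch of the pattern from issue #160, with the cache disabled as a workaround (the same `load_from_cache_file=False` flag appears in issue #211 above); after the fix in #162 the flag is no longer needed for correctness:

```python
import nlp

dataset = nlp.load_dataset('glue', 'sst2')

def identity(example):
    return example

# Before #162, both calls could resolve to the same cache file and return
# train-sized outputs; bypassing the cache sidesteps the collision.
train_output = dataset['train'].map(identity, load_from_cache_file=False)
valid_output = dataset['validation'].map(identity, load_from_cache_file=False)
print(len(train_output), len(valid_output))
```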
620,420,700
159
How can we add more datasets to nlp library?
closed
[]
2020-05-18T18:35:31
2020-05-18T18:37:08
2020-05-18T18:37:07
Tahsin-Mayeesha
https://github.com/huggingface/datasets/issues/159
null
false
620,396,658
158
add Toronto Books Corpus
closed
[]
2020-05-18T17:54:45
2020-06-11T07:49:15
2020-05-19T07:34:56
This PR adds the Toronto Books Corpus. It only considers the TMX and plain text files (Moses) defined in the **Statistics and TMX/Moses Downloads** table [here](http://opus.nlpl.eu/Books.php).
mariamabarham
https://github.com/huggingface/datasets/pull/158
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/158", "html_url": "https://github.com/huggingface/datasets/pull/158", "diff_url": "https://github.com/huggingface/datasets/pull/158.diff", "patch_url": "https://github.com/huggingface/datasets/pull/158.patch", "merged_at": null }
true
620,356,542
157
nlp.load_dataset() gives "TypeError: list_() takes exactly one argument (2 given)"
closed
[]
2020-05-18T16:46:38
2020-06-05T08:08:58
2020-06-05T08:08:58
I'm trying to load datasets from nlp, but there seems to be an error saying "TypeError: list_() takes exactly one argument (2 given)". A gist can be found here: https://gist.github.com/saahiluppal/c4b878f330b10b9ab9762bc0776c0a6a
saahiluppal
https://github.com/huggingface/datasets/issues/157
null
false
620,263,687
156
SyntaxError with WMT datasets
closed
[]
2020-05-18T14:38:18
2020-07-23T16:41:55
2020-07-23T16:41:55
The following snippet produces a syntax error: ``` import nlp dataset = nlp.load_dataset('wmt14') print(dataset['train'][0]) ``` ``` Traceback (most recent call last): File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code exec(code_obj, self....
tomhosking
https://github.com/huggingface/datasets/issues/156
null
false
620,067,946
155
Include more links in README, fix typos
closed
[]
2020-05-18T09:47:08
2020-05-28T08:31:57
2020-05-28T08:31:57
Include more links and fix typos in README
bharatr21
https://github.com/huggingface/datasets/pull/155
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/155", "html_url": "https://github.com/huggingface/datasets/pull/155", "diff_url": "https://github.com/huggingface/datasets/pull/155.diff", "patch_url": "https://github.com/huggingface/datasets/pull/155.patch", "merged_at": "2020-05-28T08:31:57"...
true
620,059,066
154
add Ubuntu Dialogs Corpus datasets
closed
[]
2020-05-18T09:34:48
2020-05-18T10:12:28
2020-05-18T10:12:27
This PR adds the Ubuntu Dialog Corpus datasets version 2.0.
mariamabarham
https://github.com/huggingface/datasets/pull/154
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/154", "html_url": "https://github.com/huggingface/datasets/pull/154", "diff_url": "https://github.com/huggingface/datasets/pull/154.diff", "patch_url": "https://github.com/huggingface/datasets/pull/154.patch", "merged_at": "2020-05-18T10:12:27"...
true
619,972,246
153
Meta-datasets (GLUE/XTREME/...) – Special care to attributions and citations
open
[]
2020-05-18T07:24:22
2020-05-18T21:18:16
null
Meta-datasets are interesting in terms of standardized benchmarks but they also have specific behaviors, in particular in terms of attribution and authorship. It's very important that each specific dataset inside a meta dataset is properly referenced and the citation/specific homepage/etc are very visible and accessibl...
thomwolf
https://github.com/huggingface/datasets/issues/153
null
false
619,971,900
152
Add GLUE config name check
closed
[]
2020-05-18T07:23:43
2020-05-27T22:09:12
2020-05-27T22:09:12
Fixes #130 by adding a name check to the Glue class
bharatr21
https://github.com/huggingface/datasets/pull/152
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/152", "html_url": "https://github.com/huggingface/datasets/pull/152", "diff_url": "https://github.com/huggingface/datasets/pull/152.diff", "patch_url": "https://github.com/huggingface/datasets/pull/152.patch", "merged_at": null }
true
619,968,480
151
Fix JSON tests.
closed
[]
2020-05-18T07:17:38
2020-05-18T07:21:52
2020-05-18T07:21:51
jplu
https://github.com/huggingface/datasets/pull/151
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/151", "html_url": "https://github.com/huggingface/datasets/pull/151", "diff_url": "https://github.com/huggingface/datasets/pull/151.diff", "patch_url": "https://github.com/huggingface/datasets/pull/151.patch", "merged_at": "2020-05-18T07:21:51"...
true
619,809,645
150
Add WNUT 17 NER dataset
closed
[]
2020-05-17T22:19:04
2020-05-26T20:37:59
2020-05-26T20:37:59
Hi, this PR adds the WNUT 17 dataset to `nlp`. > Emerging and Rare entity recognition > This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarisati...
stefan-it
https://github.com/huggingface/datasets/pull/150
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/150", "html_url": "https://github.com/huggingface/datasets/pull/150", "diff_url": "https://github.com/huggingface/datasets/pull/150.diff", "patch_url": "https://github.com/huggingface/datasets/pull/150.patch", "merged_at": "2020-05-26T20:37:59"...
true
619,735,739
149
[Feature request] Add Ubuntu Dialogue Corpus dataset
closed
[]
2020-05-17T15:42:39
2020-05-18T17:01:46
2020-05-18T17:01:46
https://github.com/rkadlec/ubuntu-ranking-dataset-creator or http://dataset.cs.mcgill.ca/ubuntu-corpus-1.0/
danth
https://github.com/huggingface/datasets/issues/149
null
false
619,590,555
148
_download_and_prepare() got an unexpected keyword argument 'verify_infos'
closed
[]
2020-05-17T01:48:53
2020-05-18T07:38:33
2020-05-18T07:38:33
# Reproduce In Colab, ``` %pip install -q nlp %pip install -q apache_beam mwparserfromhell dataset = nlp.load_dataset('wikipedia') ``` get ``` Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/w...
richarddwang
https://github.com/huggingface/datasets/issues/148
null
false
619,581,907
147
Error with sklearn train_test_split
closed
[]
2020-05-17T00:28:24
2020-06-18T16:23:23
2020-06-18T16:23:23
It would be nice if we could use sklearn `train_test_split` to quickly generate subsets from the dataset objects returned by `nlp.load_dataset`. At the moment the code: ```python data = nlp.load_dataset('imdb', cache_dir=data_cache) f_half, s_half = train_test_split(data['train'], test_size=0.5, random_state=seed)...
ClonedOne
https://github.com/huggingface/datasets/issues/147
null
false
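A workaround sketch for issue #147: split index lists with sklearn, then materialize each side (the `.select(indices)` method is an assumption; split strings like `'train[:50%]'` were the supported route at the time):

```python
import nlp
from sklearn.model_selection import train_test_split

data = nlp.load_dataset('imdb', split='train')

first_idx, second_idx = train_test_split(
    list(range(len(data))), test_size=0.5, random_state=42
)
first_half = data.select(first_idx)    # assumed API
second_half = data.select(second_idx)  # assumed API
```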
619,564,653
146
Add BERTScore to metrics
closed
[]
2020-05-16T22:09:39
2020-05-17T22:22:10
2020-05-17T22:22:09
This PR adds [BERTScore](https://arxiv.org/abs/1904.09675) to metrics. Here is an example of how to use it. ```sh import nlp bertscore = nlp.load_metric('metrics/bertscore') # or simply nlp.load_metric('bertscore') after this is added to huggingface's s3 bucket predictions = ['example', 'fruit'] references = [[...
felixgwu
https://github.com/huggingface/datasets/pull/146
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/146", "html_url": "https://github.com/huggingface/datasets/pull/146", "diff_url": "https://github.com/huggingface/datasets/pull/146.diff", "patch_url": "https://github.com/huggingface/datasets/pull/146.patch", "merged_at": "2020-05-17T22:22:09"...
true
619,480,549
145
[AWS Tests] Follow-up PR from #144
closed
[]
2020-05-16T13:53:46
2020-05-16T13:54:23
2020-05-16T13:54:22
I forgot to add this line in PR #144.
patrickvonplaten
https://github.com/huggingface/datasets/pull/145
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/145", "html_url": "https://github.com/huggingface/datasets/pull/145", "diff_url": "https://github.com/huggingface/datasets/pull/145.diff", "patch_url": "https://github.com/huggingface/datasets/pull/145.patch", "merged_at": "2020-05-16T13:54:22"...
true
619,477,367
144
[AWS tests] AWS test should not run for canonical datasets
closed
[]
2020-05-16T13:39:30
2020-05-16T13:44:34
2020-05-16T13:44:33
AWS tests should in general not run for canonical datasets. Only local tests will run in this case. This way a PR is able to pass when adding a new dataset. This PR changes the logic to the following: 1) All datasets that are present in `nlp/datasets` are tested only locally. This way when one adds a canonical da...
patrickvonplaten
https://github.com/huggingface/datasets/pull/144
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/144", "html_url": "https://github.com/huggingface/datasets/pull/144", "diff_url": "https://github.com/huggingface/datasets/pull/144.diff", "patch_url": "https://github.com/huggingface/datasets/pull/144.patch", "merged_at": "2020-05-16T13:44:33"...
true
619,457,641
143
ArrowTypeError in squad metrics
closed
[]
2020-05-16T12:06:37
2020-05-22T13:38:52
2020-05-22T13:36:48
`squad_metric.compute` is giving following error ``` ArrowTypeError: Could not convert [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}] with type list: was not a dict, tuple, or recognized null value for conversion to struct type ``` This is how my predictions and references lo...
patil-suraj
https://github.com/huggingface/datasets/issues/143
null
false
619,450,068
142
[WMT] Add all wmt
closed
[]
2020-05-16T11:28:46
2020-05-17T12:18:21
2020-05-17T12:18:20
This PR adds all wmt datasets scripts. At the moment the script is **not** functional for the language pairs "cs-en", "ru-en", "hi-en" because apparently it takes up to a week to get the manual data for these datasets: see http://ufal.mff.cuni.cz/czeng. The datasets are fully functional though for the "big" languag...
patrickvonplaten
https://github.com/huggingface/datasets/pull/142
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/142", "html_url": "https://github.com/huggingface/datasets/pull/142", "diff_url": "https://github.com/huggingface/datasets/pull/142.diff", "patch_url": "https://github.com/huggingface/datasets/pull/142.patch", "merged_at": "2020-05-17T12:18:20"...
true
619,447,090
141
[Clean up] remove bogus folder
closed
[]
2020-05-16T11:13:42
2020-05-16T13:24:27
2020-05-16T13:24:26
@mariamabarham - I think you accidentally placed it there.
patrickvonplaten
https://github.com/huggingface/datasets/pull/141
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/141", "html_url": "https://github.com/huggingface/datasets/pull/141", "diff_url": "https://github.com/huggingface/datasets/pull/141.diff", "patch_url": "https://github.com/huggingface/datasets/pull/141.patch", "merged_at": "2020-05-16T13:24:25"...
true
619,443,613
140
[Tests] run local tests as default
closed
[]
2020-05-16T10:56:06
2020-05-16T13:21:44
2020-05-16T13:21:43
This PR also enables local tests by default. I think it's safer for now to enable both local and AWS tests for every commit. The problem currently is that when we do a PR to add a dataset, the dataset is not yet on AWS and therefore not tested on the PR itself. Thus the PR will always be green even if the datasets are...
patrickvonplaten
https://github.com/huggingface/datasets/pull/140
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/140", "html_url": "https://github.com/huggingface/datasets/pull/140", "diff_url": "https://github.com/huggingface/datasets/pull/140.diff", "patch_url": "https://github.com/huggingface/datasets/pull/140.patch", "merged_at": "2020-05-16T13:21:43"...
true
619,327,409
139
Add GermEval 2014 NER dataset
closed
[]
2020-05-15T23:42:09
2020-05-16T13:56:37
2020-05-16T13:56:22
Hi, this PR adds the GermEval 2014 NER dataset 😃 > The GermEval 2014 NER Shared Task builds on a new dataset with German Named Entity annotation [1] with the following properties: > - The data was sampled from German Wikipedia and News Corpora as a collection of citations. > - The dataset covers over 31,000...
stefan-it
https://github.com/huggingface/datasets/pull/139
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/139", "html_url": "https://github.com/huggingface/datasets/pull/139", "diff_url": "https://github.com/huggingface/datasets/pull/139.diff", "patch_url": "https://github.com/huggingface/datasets/pull/139.patch", "merged_at": "2020-05-16T13:56:22"...
true
619,225,191
138
Consider renaming to nld
closed
[]
2020-05-15T20:23:27
2022-09-16T05:18:22
2020-09-28T00:08:10
Hey :) Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing. The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This...
honnibal
https://github.com/huggingface/datasets/issues/138
null
false
619,211,018
136
Update README.md
closed
[]
2020-05-15T20:01:07
2020-05-17T12:17:28
2020-05-17T12:17:28
small typo
renaud
https://github.com/huggingface/datasets/pull/136
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/136", "html_url": "https://github.com/huggingface/datasets/pull/136", "diff_url": "https://github.com/huggingface/datasets/pull/136.diff", "patch_url": "https://github.com/huggingface/datasets/pull/136.patch", "merged_at": null }
true
619,206,708
135
Fix print statement in READ.md
closed
[]
2020-05-15T19:52:23
2020-05-17T12:14:06
2020-05-17T12:14:05
The print statement was printing a generator object instead of the names of the available datasets/metrics.
codehunk628
https://github.com/huggingface/datasets/pull/135
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/135", "html_url": "https://github.com/huggingface/datasets/pull/135", "diff_url": "https://github.com/huggingface/datasets/pull/135.diff", "patch_url": "https://github.com/huggingface/datasets/pull/135.patch", "merged_at": "2020-05-17T12:14:05"...
true
619,112,641
134
Update README.md
closed
[]
2020-05-15T16:56:14
2020-05-28T08:21:49
2020-05-28T08:21:49
pranv
https://github.com/huggingface/datasets/pull/134
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/134", "html_url": "https://github.com/huggingface/datasets/pull/134", "diff_url": "https://github.com/huggingface/datasets/pull/134.diff", "patch_url": "https://github.com/huggingface/datasets/pull/134.patch", "merged_at": null }
true
619,094,954
133
[Question] Using/adding a local dataset
closed
[]
2020-05-15T16:26:06
2020-07-23T16:44:09
2020-07-23T16:44:09
Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets. It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not exactly sure how to go about doing this. ...
zphang
https://github.com/huggingface/datasets/issues/133
null
false
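Two hedged routes for the local-dataset question in issue #133 (the paths and file names are hypothetical, and whether `load_dataset` resolves local script paths this way is an assumption):

```python
import nlp

# Route 1: point load_dataset at a local dataset script.
dataset = nlp.load_dataset('./my_dataset/my_dataset.py')

# Route 2: use a generic loader over plain data files.
csv_dataset = nlp.load_dataset('csv', data_files={'train': 'train.csv'})
```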
619,077,851
132
[Feature Request] Add the OpenWebText dataset
closed
[]
2020-05-15T15:57:29
2020-10-07T14:22:48
2020-10-07T14:22:48
The OpenWebText dataset is an open clone of OpenAI's WebText dataset. It can be used to train ELECTRA as is specified in the [README](https://www.github.com/google-research/electra). More information and the download link are available [here](https://skylion007.github.io/OpenWebTextCorpus/).
LysandreJik
https://github.com/huggingface/datasets/issues/132
null
false
619,073,731
131
[Feature request] Add Toronto BookCorpus dataset
closed
[]
2020-05-15T15:50:44
2020-06-28T21:27:31
2020-06-28T21:27:31
I know the copyright/distribution of this one is complex, but it would be great to have! That, combined with the existing `wikitext`, would provide a complete dataset for pretraining models like BERT.
jarednielsen
https://github.com/huggingface/datasets/issues/131
null
false
619,035,440
130
Loading GLUE dataset loads CoLA by default
closed
[]
2020-05-15T14:55:50
2020-05-27T22:08:15
2020-05-27T22:08:15
If I run: ```python dataset = nlp.load_dataset('glue') ``` The resultant dataset seems to be CoLA be default, without throwing any error. This is in contrast to calling: ```python metric = nlp.load_metric("glue") ``` which throws an error telling the user that they need to specify a task in GLUE. Should the...
zphang
https://github.com/huggingface/datasets/issues/130
null
false
618,997,725
129
[Feature request] Add Google Natural Question dataset
closed
[]
2020-05-15T14:14:20
2020-07-23T13:21:29
2020-07-23T13:21:29
Would be great to have https://github.com/google-research-datasets/natural-questions as an alternative to SQuAD.
elyase
https://github.com/huggingface/datasets/issues/129
null
false
618,951,117
128
Some error inside nlp.load_dataset()
closed
[]
2020-05-15T13:01:29
2020-05-15T13:10:40
2020-05-15T13:10:40
First of all, nice work! I am going through [this overview notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb) At the simple step `dataset = nlp.load_dataset('squad', split='validation[:10%]')` I get an error, which is connected with some inner code, I think: `...
polkaYK
https://github.com/huggingface/datasets/issues/128
null
false
618,909,042
127
Update Overview.ipynb
closed
[]
2020-05-15T11:46:48
2020-05-15T11:47:27
2020-05-15T11:47:25
update notebook
patrickvonplaten
https://github.com/huggingface/datasets/pull/127
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/127", "html_url": "https://github.com/huggingface/datasets/pull/127", "diff_url": "https://github.com/huggingface/datasets/pull/127.diff", "patch_url": "https://github.com/huggingface/datasets/pull/127.patch", "merged_at": "2020-05-15T11:47:25"...
true
618,897,499
126
remove webis
closed
[]
2020-05-15T11:25:20
2020-05-15T11:31:24
2020-05-15T11:30:26
Remove webis from dataset folder. Our first dataset script that only lives on AWS :-) https://s3.console.aws.amazon.com/s3/buckets/datasets.huggingface.co/nlp/datasets/webis/tl_dr/?region=us-east-1 @julien-c @jplu
patrickvonplaten
https://github.com/huggingface/datasets/pull/126
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/126", "html_url": "https://github.com/huggingface/datasets/pull/126", "diff_url": "https://github.com/huggingface/datasets/pull/126.diff", "patch_url": "https://github.com/huggingface/datasets/pull/126.patch", "merged_at": "2020-05-15T11:30:26"...
true
618,869,048
125
[Newsroom] add newsroom
closed
[]
2020-05-15T10:34:34
2020-05-15T10:37:07
2020-05-15T10:37:02
I checked it with the data link of the mail you forwarded @thomwolf => works well!
patrickvonplaten
https://github.com/huggingface/datasets/pull/125
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/125", "html_url": "https://github.com/huggingface/datasets/pull/125", "diff_url": "https://github.com/huggingface/datasets/pull/125.diff", "patch_url": "https://github.com/huggingface/datasets/pull/125.patch", "merged_at": "2020-05-15T10:37:02"...
true
618,864,284
124
Xsum, require manual download of some files
closed
[]
2020-05-15T10:26:13
2020-05-15T11:04:48
2020-05-15T11:04:46
mariamabarham
https://github.com/huggingface/datasets/pull/124
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/124", "html_url": "https://github.com/huggingface/datasets/pull/124", "diff_url": "https://github.com/huggingface/datasets/pull/124.diff", "patch_url": "https://github.com/huggingface/datasets/pull/124.patch", "merged_at": "2020-05-15T11:04:46"...
true
618,820,140
123
[Tests] Local => aws
closed
[]
2020-05-15T09:12:25
2020-05-15T10:06:12
2020-05-15T10:03:26
## Change default test from local => aws As a default we set `aws=True`, `Local=False`, `slow=False` ### 1. RUN_AWS=1 (default) This runs 4 tests per dataset script. a) Does the dataset script have a valid etag / Can it be reached on AWS? b) Can we load its `builder_class`? c) Can we load **all** dataset c...
patrickvonplaten
https://github.com/huggingface/datasets/pull/123
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/123", "html_url": "https://github.com/huggingface/datasets/pull/123", "diff_url": "https://github.com/huggingface/datasets/pull/123.diff", "patch_url": "https://github.com/huggingface/datasets/pull/123.patch", "merged_at": "2020-05-15T10:03:26"...
true
618,813,182
122
Final cleanup of readme and metrics
closed
[]
2020-05-15T09:00:52
2021-09-03T19:40:09
2020-05-15T09:02:22
thomwolf
https://github.com/huggingface/datasets/pull/122
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/122", "html_url": "https://github.com/huggingface/datasets/pull/122", "diff_url": "https://github.com/huggingface/datasets/pull/122.diff", "patch_url": "https://github.com/huggingface/datasets/pull/122.patch", "merged_at": "2020-05-15T09:02:22"...
true
618,790,040
121
make style
closed
[]
2020-05-15T08:23:36
2020-05-15T08:25:39
2020-05-15T08:25:38
patrickvonplaten
https://github.com/huggingface/datasets/pull/121
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/121", "html_url": "https://github.com/huggingface/datasets/pull/121", "diff_url": "https://github.com/huggingface/datasets/pull/121.diff", "patch_url": "https://github.com/huggingface/datasets/pull/121.patch", "merged_at": "2020-05-15T08:25:38"...
true
618,737,783
120
๐Ÿ› `map` not working
closed
[]
2020-05-15T06:43:08
2020-05-15T07:02:38
2020-05-15T07:02:38
I'm trying to run a basic example (mapping function to add a prefix). [Here is the colab notebook I'm using.](https://colab.research.google.com/drive/1YH4JCAy0R1MMSc-k_Vlik_s1LEzP_t1h?usp=sharing) ```python import nlp dataset = nlp.load_dataset('squad', split='validation[:10%]') def test(sample): samp...
astariul
https://github.com/huggingface/datasets/issues/120
null
false
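A minimal working variant of the notebook snippet from issue #120, assuming SQuAD's `title` column; the mapped function must return the (possibly modified) example dict:

```python
import nlp

dataset = nlp.load_dataset('squad', split='validation[:10%]')

def add_prefix(example):
    example['title'] = 'My cute title: ' + example['title']
    return example

prefixed = dataset.map(add_prefix)
print(prefixed[0]['title'])
```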
618,652,145
119
๐Ÿ› Colab : type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
closed
[]
2020-05-15T02:27:26
2020-05-15T05:11:22
2020-05-15T02:45:28
I'm trying to load CNN/DM dataset on Colab. [Colab notebook](https://colab.research.google.com/drive/11Mf7iNhIyt6GpgA1dBEtg3cyMHmMhtZS?usp=sharing) But I meet this error : > AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
astariul
https://github.com/huggingface/datasets/issues/119
null
false