Dataset schema (per column: type, then min/max of the value or of the string/list length):

| Column | Type | Min | Max |
| --- | --- | --- | --- |
| id | int64 | 599M | 3.48B |
| number | int64 | 1 | 7.8k |
| title | string (length) | 1 | 290 |
| state | string (2 classes) | — | — |
| comments | list (length) | 0 | 30 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-10-05 06:37:50 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-10-05 10:32:43 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-10-01 13:56:03 |
| body | string (length) | 0 | 228k |
| user | string (length) | 3 | 26 |
| html_url | string (length) | 46 | 51 |
| pull_request | dict | — | — |
| is_pull_request | bool (2 classes) | — | — |
663,079,359
423
Change features vs schema logic
closed
[]
2020-07-21T14:52:47
2020-07-25T09:08:34
2020-07-23T10:15:17
## New logic for `nlp.Features` in datasets Previously, it was confusing to have `features` and pyarrow's `schema` in `nlp.Dataset`. However `features` is supposed to be the front-facing object to define the different fields of a dataset, while `schema` is only used to write arrow files. Changes: - Remove `sche...
lhoestq
https://github.com/huggingface/datasets/pull/423
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/423", "html_url": "https://github.com/huggingface/datasets/pull/423", "diff_url": "https://github.com/huggingface/datasets/pull/423.diff", "patch_url": "https://github.com/huggingface/datasets/pull/423.patch", "merged_at": "2020-07-23T10:15:16"...
true
663,028,497
422
- Corrected encoding for IMDB.
closed
[]
2020-07-21T13:46:59
2020-07-22T16:02:53
2020-07-22T16:02:53
The preparation phase (after the download phase) crashed on windows because of charmap encoding not being able to decode certain characters. This change suggested in Issue #347 fixes it for the IMDB dataset.
ghazi-f
https://github.com/huggingface/datasets/pull/422
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/422", "html_url": "https://github.com/huggingface/datasets/pull/422", "diff_url": "https://github.com/huggingface/datasets/pull/422.diff", "patch_url": "https://github.com/huggingface/datasets/pull/422.patch", "merged_at": "2020-07-22T16:02:53"...
true
662,213,864
421
Style change
closed
[]
2020-07-20T20:08:29
2020-07-22T16:08:40
2020-07-22T16:08:39
make quality and make style ran on scripts
lordtt13
https://github.com/huggingface/datasets/pull/421
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/421", "html_url": "https://github.com/huggingface/datasets/pull/421", "diff_url": "https://github.com/huggingface/datasets/pull/421.diff", "patch_url": "https://github.com/huggingface/datasets/pull/421.patch", "merged_at": null }
true
662,029,782
420
Better handle nested features
closed
[]
2020-07-20T16:44:13
2020-07-21T08:20:49
2020-07-21T08:09:52
Changes: - added arrow schema to features conversion (it's going to be useful to fix #342 ) - make flatten handle deep features (useful for tfrecords conversion in #339 ) - add tests for flatten and features conversions - the reader now returns the kwargs to instantiate a Dataset (fix circular dependencies)
lhoestq
https://github.com/huggingface/datasets/pull/420
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/420", "html_url": "https://github.com/huggingface/datasets/pull/420", "diff_url": "https://github.com/huggingface/datasets/pull/420.diff", "patch_url": "https://github.com/huggingface/datasets/pull/420.patch", "merged_at": "2020-07-21T08:09:51"...
true
661,974,747
419
EmoContext dataset add
closed
[]
2020-07-20T15:48:45
2020-07-24T08:22:01
2020-07-24T08:22:00
EmoContext Dataset add Signed-off-by: lordtt13 <[email protected]>
lordtt13
https://github.com/huggingface/datasets/pull/419
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/419", "html_url": "https://github.com/huggingface/datasets/pull/419", "diff_url": "https://github.com/huggingface/datasets/pull/419.diff", "patch_url": "https://github.com/huggingface/datasets/pull/419.patch", "merged_at": "2020-07-24T08:22:00"...
true
661,914,873
418
Addition of google drive links to dl_manager
closed
[]
2020-07-20T14:52:02
2020-07-20T15:39:32
2020-07-20T15:39:32
Hello there, I followed the template to create a download script of my own, which works fine for me, although I had to shun the dl_manager because it was downloading nothing from the drive links and instead use gdown. This is the script for me: ```python class EmoConfig(nlp.BuilderConfig): """BuilderConfig ...
lordtt13
https://github.com/huggingface/datasets/issues/418
null
false
661,804,054
417
Fix docstrins multiple metrics instances
closed
[]
2020-07-20T13:08:59
2020-07-22T09:51:00
2020-07-22T09:50:59
We change the docstrings of `nlp.Metric.compute`, `nlp.Metric.add` and `nlp.Metric.add_batch` depending on which metric is instantiated. However we had issues when instantiating multiple metrics (docstrings were duplicated). This should fix #304
lhoestq
https://github.com/huggingface/datasets/pull/417
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/417", "html_url": "https://github.com/huggingface/datasets/pull/417", "diff_url": "https://github.com/huggingface/datasets/pull/417.diff", "patch_url": "https://github.com/huggingface/datasets/pull/417.patch", "merged_at": "2020-07-22T09:50:58"...
true
661,635,393
416
Fix xtreme panx directory
closed
[]
2020-07-20T10:09:17
2020-07-21T08:15:46
2020-07-21T08:15:44
Fix #412
lhoestq
https://github.com/huggingface/datasets/pull/416
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/416", "html_url": "https://github.com/huggingface/datasets/pull/416", "diff_url": "https://github.com/huggingface/datasets/pull/416.diff", "patch_url": "https://github.com/huggingface/datasets/pull/416.patch", "merged_at": "2020-07-21T08:15:44"...
true
660,687,076
415
Something is wrong with WMT 19 kk-en dataset
open
[]
2020-07-19T08:18:51
2020-07-20T09:54:26
null
The translation in the `train` set does not look right: ``` >>>import nlp >>>from nlp import load_dataset >>>dataset = load_dataset('wmt19', 'kk-en') >>>dataset["train"]["translation"][0] {'kk': 'Trumpian Uncertainty', 'en': 'Трамптық белгісіздік'} >>>dataset["validation"]["translation"][0] {'kk': 'Ақша-несие...
ChenghaoMou
https://github.com/huggingface/datasets/issues/415
null
false
660,654,013
414
from_dict delete?
closed
[]
2020-07-19T07:08:36
2020-07-21T02:21:17
2020-07-21T02:21:17
AttributeError: type object 'Dataset' has no attribute 'from_dict'
hackerxiaobai
https://github.com/huggingface/datasets/issues/414
null
false
660,063,655
413
Is there a way to download only NQ dev?
closed
[]
2020-07-18T10:28:23
2022-02-11T09:50:21
2022-02-11T09:50:21
Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)? As we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data. I tried ``` dataset = nlp.load_dataset('natural_questions', split="validation", bea...
tholor
https://github.com/huggingface/datasets/issues/413
null
false
660,047,139
412
Unable to load XTREME dataset from disk
closed
[]
2020-07-18T09:55:00
2020-07-21T08:15:44
2020-07-21T08:15:44
Hi 🤗 team! ## Description of the problem Following the [docs](https://huggingface.co/nlp/loading_datasets.html?highlight=xtreme#manually-downloading-files) I'm trying to load the `PAN-X.fr` dataset from the [XTREME](https://github.com/google-research/xtreme) benchmark. I have manually downloaded the `AmazonPho...
lewtun
https://github.com/huggingface/datasets/issues/412
null
false
659,393,398
411
Sbf
closed
[]
2020-07-17T16:19:45
2020-07-21T09:13:46
2020-07-21T09:13:45
This PR adds the Social Bias Frames Dataset (ACL 2020) . dataset homepage: https://homes.cs.washington.edu/~msap/social-bias-frames/
mariamabarham
https://github.com/huggingface/datasets/pull/411
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/411", "html_url": "https://github.com/huggingface/datasets/pull/411", "diff_url": "https://github.com/huggingface/datasets/pull/411.diff", "patch_url": "https://github.com/huggingface/datasets/pull/411.patch", "merged_at": "2020-07-21T09:13:45"...
true
659,242,871
410
20newsgroup
closed
[]
2020-07-17T13:07:57
2020-07-20T07:05:29
2020-07-20T07:05:28
Add 20Newsgroup dataset. #353
mariamabarham
https://github.com/huggingface/datasets/pull/410
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/410", "html_url": "https://github.com/huggingface/datasets/pull/410", "diff_url": "https://github.com/huggingface/datasets/pull/410.diff", "patch_url": "https://github.com/huggingface/datasets/pull/410.patch", "merged_at": "2020-07-20T07:05:28"...
true
659,128,611
409
train_test_split error: 'dict' object has no attribute 'deepcopy'
closed
[]
2020-07-17T10:36:28
2020-07-21T14:34:52
2020-07-21T14:34:52
`train_test_split` is giving me an error when I try and call it: `'dict' object has no attribute 'deepcopy'` ## To reproduce ``` dataset = load_dataset('glue', 'mrpc', split='train') dataset = dataset.train_test_split(test_size=0.2) ``` ## Full Stacktrace ``` -------------------------------------------...
morganmcg1
https://github.com/huggingface/datasets/issues/409
null
false
659,064,144
408
Add tests datasets gcp
closed
[]
2020-07-17T09:23:27
2020-07-17T09:26:57
2020-07-17T09:26:56
Some datasets are available on our google cloud storage in arrow format, so that the users don't need to process the data. These tests make sure that they're always available. It also makes sure that their scripts are in sync between S3 and the repo. This should avoid future issues like #407
lhoestq
https://github.com/huggingface/datasets/pull/408
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/408", "html_url": "https://github.com/huggingface/datasets/pull/408", "diff_url": "https://github.com/huggingface/datasets/pull/408.diff", "patch_url": "https://github.com/huggingface/datasets/pull/408.patch", "merged_at": "2020-07-17T09:26:56"...
true
658,672,736
407
MissingBeamOptions for Wikipedia 20200501.en
closed
[]
2020-07-16T23:48:03
2021-01-12T11:41:16
2020-07-17T14:24:28
There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available): ``` nlp.load_dataset('wikipedia', "20200501.en", split='train') ``` And now, having pulled master, I get: ``` Downloading and preparing dataset wikipedia...
mitchellgordon95
https://github.com/huggingface/datasets/issues/407
null
false
658,581,764
406
Faster Shuffling?
closed
[]
2020-07-16T21:21:53
2023-08-16T09:52:39
2020-09-07T14:45:25
Consider shuffling bookcorpus: ``` dataset = nlp.load_dataset('bookcorpus', split='train') dataset.shuffle() ``` According to tqdm, this will take around 2.5 hours on my machine to complete (even with the faster version of select from #405). I've also tried with `keep_in_memory=True` and `writer_batch_size=1000`...
mitchellgordon95
https://github.com/huggingface/datasets/issues/406
null
false
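For context on why the shuffle in issue #406 above is slow: a disk-backed dataset shuffle amounts to drawing a random permutation of row indices and reading every row back through that mapping, which turns shuffling into a large random-read workload. A minimal pure-Python sketch of the idea, with illustrative names and no `nlp`/`datasets` dependency:

```python
import random

def shuffle_indices(num_rows, seed=None):
    """Return a random permutation of row indices (the 'indices mapping')."""
    rng = random.Random(seed)
    indices = list(range(num_rows))
    rng.shuffle(indices)
    return indices

def select(table, indices):
    """Materialize rows of a columnar table in the order given by `indices`."""
    return {col: [values[i] for i in indices] for col, values in table.items()}

table = {"text": ["a", "b", "c", "d"], "label": [0, 1, 0, 1]}
perm = shuffle_indices(len(table["text"]), seed=42)
shuffled = select(table, perm)
# Shuffling is just `select` over a permutation; rows stay aligned across columns.
```

Because `select` must fetch each row individually, the cost is dominated by reads, which is consistent with the issue's observation that batching reads (#405) helps.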
658,580,192
405
Make select() faster by batching reads
closed
[]
2020-07-16T21:19:45
2020-07-17T17:05:44
2020-07-17T16:51:26
Here's a benchmark: ``` dataset = nlp.load_dataset('bookcorpus', split='train') start = time.time() dataset.select(np.arange(1000), reader_batch_size=1, load_from_cache_file=False) end = time.time() print(f'{end - start}') start = time.time() dataset.select(np.arange(1000), reader_batch_size=1000, load_fr...
mitchellgordon95
https://github.com/huggingface/datasets/pull/405
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/405", "html_url": "https://github.com/huggingface/datasets/pull/405", "diff_url": "https://github.com/huggingface/datasets/pull/405.diff", "patch_url": "https://github.com/huggingface/datasets/pull/405.patch", "merged_at": "2020-07-17T16:51:26"...
true
658,400,987
404
Add seed in metrics
closed
[]
2020-07-16T17:27:05
2020-07-20T10:12:35
2020-07-20T10:12:34
With #361 we noticed that some metrics were not deterministic. In this PR I allow the user to specify numpy's seed when instantiating a metric with `load_metric`. The seed is set only when `compute` is called, and reset afterwards. Moreover when calling `compute` with the same metric instance (i.e. same experiment...
lhoestq
https://github.com/huggingface/datasets/pull/404
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/404", "html_url": "https://github.com/huggingface/datasets/pull/404", "diff_url": "https://github.com/huggingface/datasets/pull/404.diff", "patch_url": "https://github.com/huggingface/datasets/pull/404.patch", "merged_at": "2020-07-20T10:12:34"...
true
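The seed-scoping behaviour described in PR #404 (seed the RNG only for the duration of `compute`, then restore the previous state) can be sketched in plain Python, with the stdlib `random` module standing in for numpy; the names below are hypothetical, not the actual `nlp` implementation:

```python
import random
from contextlib import contextmanager

@contextmanager
def temp_seed(seed):
    """Seed the global RNG for the enclosed block, then restore the prior state."""
    state = random.getstate()
    random.seed(seed)
    try:
        yield
    finally:
        random.setstate(state)

def compute_metric(predictions, seed=None):
    """A toy non-deterministic 'metric': average over a random subsample."""
    def _score():
        sample = random.sample(predictions, k=len(predictions) // 2)
        return sum(sample) / len(sample)
    if seed is not None:
        with temp_seed(seed):
            return _score()
    return _score()

# With a fixed seed, repeated compute calls give identical results.
scores = [compute_metric(list(range(100)), seed=123) for _ in range(3)]
```

Restoring the saved state afterwards means seeding a metric cannot silently change the randomness of the surrounding training loop.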
658,325,756
403
return python objects instead of arrays by default
closed
[]
2020-07-16T15:51:52
2020-07-17T11:37:01
2020-07-17T11:37:00
We were using to_pandas() to convert from arrow types, however it returns numpy arrays instead of python lists. I fixed it by using to_pydict/to_pylist instead. Fix #387 It was mentioned in https://github.com/huggingface/transformers/issues/5729
lhoestq
https://github.com/huggingface/datasets/pull/403
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/403", "html_url": "https://github.com/huggingface/datasets/pull/403", "diff_url": "https://github.com/huggingface/datasets/pull/403.diff", "patch_url": "https://github.com/huggingface/datasets/pull/403.patch", "merged_at": "2020-07-17T11:37:00"...
true
658,001,288
402
Search qa
closed
[]
2020-07-16T09:00:10
2020-07-16T14:27:00
2020-07-16T14:26:59
add SearchQA dataset #336
mariamabarham
https://github.com/huggingface/datasets/pull/402
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/402", "html_url": "https://github.com/huggingface/datasets/pull/402", "diff_url": "https://github.com/huggingface/datasets/pull/402.diff", "patch_url": "https://github.com/huggingface/datasets/pull/402.patch", "merged_at": "2020-07-16T14:26:59"...
true
657,996,252
401
add web_questions
closed
[]
2020-07-16T08:54:59
2020-08-06T06:16:20
2020-08-06T06:16:19
add Web Question dataset #336 Maybe @patrickvonplaten you can help with the dummy_data structure? it still broken
mariamabarham
https://github.com/huggingface/datasets/pull/401
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/401", "html_url": "https://github.com/huggingface/datasets/pull/401", "diff_url": "https://github.com/huggingface/datasets/pull/401.diff", "patch_url": "https://github.com/huggingface/datasets/pull/401.patch", "merged_at": "2020-08-06T06:16:19"...
true
657,975,600
400
Web questions
closed
[]
2020-07-16T08:28:29
2020-07-16T08:50:51
2020-07-16T08:42:54
add the WebQuestion dataset #336
mariamabarham
https://github.com/huggingface/datasets/pull/400
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/400", "html_url": "https://github.com/huggingface/datasets/pull/400", "diff_url": "https://github.com/huggingface/datasets/pull/400.diff", "patch_url": "https://github.com/huggingface/datasets/pull/400.patch", "merged_at": null }
true
657,841,433
399
Spelling mistake
closed
[]
2020-07-16T04:37:58
2020-07-16T06:49:48
2020-07-16T06:49:37
In "Formatting the dataset" part, "The two toehr modifications..." should be "The two other modifications..." ,the word "other" wrong spelled as "toehr".
BlancRay
https://github.com/huggingface/datasets/pull/399
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/399", "html_url": "https://github.com/huggingface/datasets/pull/399", "diff_url": "https://github.com/huggingface/datasets/pull/399.diff", "patch_url": "https://github.com/huggingface/datasets/pull/399.patch", "merged_at": "2020-07-16T06:49:37"...
true
657,511,962
398
Add inline links
closed
[]
2020-07-15T17:04:04
2020-07-22T10:14:22
2020-07-22T10:14:22
Add inline links to `Contributing.md`
bharatr21
https://github.com/huggingface/datasets/pull/398
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/398", "html_url": "https://github.com/huggingface/datasets/pull/398", "diff_url": "https://github.com/huggingface/datasets/pull/398.diff", "patch_url": "https://github.com/huggingface/datasets/pull/398.patch", "merged_at": "2020-07-22T10:14:22"...
true
657,510,856
397
Add contiguous sharding
closed
[]
2020-07-15T17:02:58
2020-07-17T16:59:31
2020-07-17T16:59:31
This makes dset.shard() play nice with nlp.concatenate_datasets(). When I originally wrote the shard() method, I was thinking about a distributed training scenario, but https://github.com/huggingface/nlp/pull/389 also uses it for splitting the dataset for distributed preprocessing. Usage: ``` nlp.concatenate_datas...
jarednielsen
https://github.com/huggingface/datasets/pull/397
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/397", "html_url": "https://github.com/huggingface/datasets/pull/397", "diff_url": "https://github.com/huggingface/datasets/pull/397.diff", "patch_url": "https://github.com/huggingface/datasets/pull/397.patch", "merged_at": "2020-07-17T16:59:30"...
true
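The contiguous-sharding property that PR #397 relies on — shard, process each shard independently, then concatenate back in the original order — can be illustrated without the library. `contiguous_shard` below is a hypothetical stand-in for the `dset.shard()` behaviour the PR adds:

```python
def contiguous_shard(rows, num_shards, index):
    """Return the index-th of num_shards contiguous slices of `rows`.

    Shard sizes differ by at most one, and concatenating shards
    0..num_shards-1 reproduces `rows` in order (unlike strided sharding,
    which interleaves rows across shards).
    """
    div, mod = divmod(len(rows), num_shards)
    start = index * div + min(index, mod)
    end = start + div + (1 if index < mod else 0)
    return rows[start:end]

rows = list(range(10))
shards = [contiguous_shard(rows, 3, i) for i in range(3)]
# Concatenating the shards in order recovers the original dataset.
```

This is why contiguous shards play well with `nlp.concatenate_datasets()` for distributed preprocessing: no reordering is needed after the shards are recombined.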
657,477,952
396
Fix memory issue when doing select
closed
[]
2020-07-15T16:15:04
2020-07-16T08:07:32
2020-07-16T08:07:31
We were passing the `nlp.Dataset` object to get the hash for the new dataset's file name. Fix #395
lhoestq
https://github.com/huggingface/datasets/pull/396
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/396", "html_url": "https://github.com/huggingface/datasets/pull/396", "diff_url": "https://github.com/huggingface/datasets/pull/396.diff", "patch_url": "https://github.com/huggingface/datasets/pull/396.patch", "merged_at": "2020-07-16T08:07:30"...
true
657,454,983
395
Memory issue when doing select
closed
[]
2020-07-15T15:43:38
2020-07-16T08:07:31
2020-07-16T08:07:31
As noticed in #389, the following code loads the entire wikipedia in memory. ```python import nlp w = nlp.load_dataset("wikipedia", "20200501.en", split="train") w.select([0]) ``` This is caused by [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/arrow_dataset.py#L626) for some reason, that ...
lhoestq
https://github.com/huggingface/datasets/issues/395
null
false
657,425,548
394
Remove remaining nested dict
closed
[]
2020-07-15T15:05:52
2020-07-16T07:39:52
2020-07-16T07:39:51
This PR deletes the remaining unnecessary nested dict #378
mariamabarham
https://github.com/huggingface/datasets/pull/394
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/394", "html_url": "https://github.com/huggingface/datasets/pull/394", "diff_url": "https://github.com/huggingface/datasets/pull/394.diff", "patch_url": "https://github.com/huggingface/datasets/pull/394.patch", "merged_at": "2020-07-16T07:39:51"...
true
657,330,911
393
Fix extracted files directory for the DownloadManager
closed
[]
2020-07-15T12:59:55
2020-07-17T17:02:16
2020-07-17T17:02:14
The cache dir was often cluttered by extracted files because of the download manager. For downloaded files, we are using the `downloads` directory to make things easier to navigate, but extracted files were still placed at the root of the cache directory. To fix that I changed the directory for extracted files to ca...
lhoestq
https://github.com/huggingface/datasets/pull/393
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/393", "html_url": "https://github.com/huggingface/datasets/pull/393", "diff_url": "https://github.com/huggingface/datasets/pull/393.diff", "patch_url": "https://github.com/huggingface/datasets/pull/393.patch", "merged_at": "2020-07-17T17:02:14"...
true
657,313,738
392
Style change detection
closed
[]
2020-07-15T12:32:14
2020-07-21T13:18:36
2020-07-17T17:13:23
Another [PAN task](https://pan.webis.de/clef20/pan20-web/style-change-detection.html). This time about identifying when the style/author changes in documents. - There's the possibility of adding the [PAN19](https://zenodo.org/record/3577602) and PAN18 style change detection tasks too (these are datasets whose labels...
ghomasHudson
https://github.com/huggingface/datasets/pull/392
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/392", "html_url": "https://github.com/huggingface/datasets/pull/392", "diff_url": "https://github.com/huggingface/datasets/pull/392.diff", "patch_url": "https://github.com/huggingface/datasets/pull/392.patch", "merged_at": "2020-07-17T17:13:23"...
true
656,956,384
390
Concatenate datasets
closed
[]
2020-07-14T23:24:37
2020-07-22T09:49:58
2020-07-22T09:49:58
I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema. This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in...
jarednielsen
https://github.com/huggingface/datasets/pull/390
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/390", "html_url": "https://github.com/huggingface/datasets/pull/390", "diff_url": "https://github.com/huggingface/datasets/pull/390.diff", "patch_url": "https://github.com/huggingface/datasets/pull/390.patch", "merged_at": "2020-07-22T09:49:58"...
true
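Concatenating two same-schema datasets, as in PR #390 above, reduces to a schema check plus a column-wise append. A minimal columnar sketch, with plain dicts standing in for Arrow tables (not the PR's actual implementation):

```python
def concatenate(left, right):
    """Concatenate two columnar tables that share the same schema (column names)."""
    if set(left) != set(right):
        raise ValueError("datasets must have the same schema to be concatenated")
    return {col: left[col] + right[col] for col in left}

# E.g. building a combined pretraining corpus from two sources.
wiki = {"text": ["wiki article 1", "wiki article 2"]}
books = {"text": ["book passage 1"]}
wikibooks = concatenate(wiki, books)
```

The schema check is what makes the operation safe: rows from both sources end up aligned under the same columns, in source order.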
656,921,768
389
Fix pickling of SplitDict
closed
[]
2020-07-14T21:53:39
2020-08-04T14:38:10
2020-08-04T14:38:10
It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example: ``` wiki = nlp.load_dataset('wikipedia', split='train') def sentencize(examples): ... wiki = wiki.map(sentencize, batched=True) torch.save(wiki, '...
mitchellgordon95
https://github.com/huggingface/datasets/pull/389
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/389", "html_url": "https://github.com/huggingface/datasets/pull/389", "diff_url": "https://github.com/huggingface/datasets/pull/389.diff", "patch_url": "https://github.com/huggingface/datasets/pull/389.patch", "merged_at": null }
true
656,707,497
388
🐛 [Dataset] Cannot download wmt14, wmt15 and wmt17
closed
[]
2020-07-14T15:36:41
2022-10-04T18:01:28
2022-10-04T18:01:28
1. I try downloading `wmt14`, `wmt15`, `wmt17`, `wmt19` with the following code: ``` nlp.load_dataset('wmt14','de-en') nlp.load_dataset('wmt15','de-en') nlp.load_dataset('wmt17','de-en') nlp.load_dataset('wmt19','de-en') ``` The code runs but the download speed is **extremely slow**, the same behaviour is not ob...
SamuelCahyawijaya
https://github.com/huggingface/datasets/issues/388
null
false
656,361,357
387
Conversion through to_pandas output numpy arrays for lists instead of python objects
closed
[]
2020-07-14T06:24:01
2020-07-17T11:37:00
2020-07-17T11:37:00
In a related question, the conversion through to_pandas output numpy arrays for the lists instead of python objects. Here is an example: ```python >>> dataset._data.slice(key, 1).to_pandas().to_dict("list") {'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting hi...
thomwolf
https://github.com/huggingface/datasets/issues/387
null
false
655,839,067
386
Update dataset loading and features - Add TREC dataset
closed
[]
2020-07-13T13:10:18
2020-07-16T08:17:58
2020-07-16T08:17:58
This PR: - add a template for a new dataset script - update the caching structure so that the path to the cached data files is also a function of the dataset loading script hash. This way when you update a loading script the data will be automatically updated instead of falling back to the previous version (which is ...
thomwolf
https://github.com/huggingface/datasets/pull/386
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/386", "html_url": "https://github.com/huggingface/datasets/pull/386", "diff_url": "https://github.com/huggingface/datasets/pull/386.diff", "patch_url": "https://github.com/huggingface/datasets/pull/386.patch", "merged_at": "2020-07-16T08:17:58"...
true
655,663,997
385
Remove unnecessary nested dict
closed
[]
2020-07-13T08:46:23
2020-07-15T11:27:38
2020-07-15T10:03:53
This PR is removing unnecessary nested dictionary used in some datasets. For now the following datasets are updated: - MLQA - RACE Will be adding more if necessary. #378
mariamabarham
https://github.com/huggingface/datasets/pull/385
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/385", "html_url": "https://github.com/huggingface/datasets/pull/385", "diff_url": "https://github.com/huggingface/datasets/pull/385.diff", "patch_url": "https://github.com/huggingface/datasets/pull/385.patch", "merged_at": "2020-07-15T10:03:53"...
true
655,291,201
383
Adding the Linguistic Code-switching Evaluation (LinCE) benchmark
closed
[]
2020-07-11T22:35:20
2020-07-16T16:19:46
2020-07-16T16:19:46
Hi, First of all, this library is really cool! Thanks for putting all of this together! This PR contains the [Linguistic Code-switching Evaluation (LinCE) benchmark](https://ritual.uh.edu/lince). As described in the official website (FAQ): > 1. Why do we need LinCE? >LinCE brings 10 code-switching datasets t...
gaguilar
https://github.com/huggingface/datasets/pull/383
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/383", "html_url": "https://github.com/huggingface/datasets/pull/383", "diff_url": "https://github.com/huggingface/datasets/pull/383.diff", "patch_url": "https://github.com/huggingface/datasets/pull/383.patch", "merged_at": "2020-07-16T16:19:46"...
true
655,290,482
382
1080
closed
[]
2020-07-11T22:29:07
2020-07-11T22:49:38
2020-07-11T22:49:38
saq194
https://github.com/huggingface/datasets/issues/382
null
false
655,277,119
381
NLp
closed
[]
2020-07-11T20:50:14
2020-07-11T20:50:39
2020-07-11T20:50:39
Spartanthor
https://github.com/huggingface/datasets/issues/381
null
false
655,226,316
378
[dataset] Structure of MLQA seems unecessary nested
closed
[]
2020-07-11T15:16:08
2020-07-15T16:17:20
2020-07-15T16:17:20
The features of the MLQA dataset comprise several nested dictionaries with a single element inside (for `questions` and `ids`): https://github.com/huggingface/nlp/blob/master/datasets/mlqa/mlqa.py#L90-L97 Should we keep this @mariamabarham @patrickvonplaten? Was this added for compatibility with tfds? ```python ...
thomwolf
https://github.com/huggingface/datasets/issues/378
null
false
655,215,790
377
Iyy!!!
closed
[]
2020-07-11T14:11:07
2020-07-11T14:30:51
2020-07-11T14:30:51
ajinomoh
https://github.com/huggingface/datasets/issues/377
null
false
655,047,826
376
to_pandas conversion doesn't always work
closed
[]
2020-07-10T21:33:31
2022-10-04T18:05:39
2022-10-04T18:05:39
For some complex nested types, the conversion from Arrow to python dict through pandas doesn't seem to be possible. Here is an example using the official SQUAD v2 JSON file. This example was found while investigating #373. ```python >>> squad = load_dataset('json', data_files={nlp.Split.TRAIN: ["./train-v2.0....
thomwolf
https://github.com/huggingface/datasets/issues/376
null
false
655,023,307
375
TypeError when computing bertscore
closed
[]
2020-07-10T20:37:44
2022-06-01T15:15:59
2022-06-01T15:15:59
Hi, I installed nlp 0.3.0 via pip, and my python version is 3.7. When I tried to compute bertscore with the code: ``` import nlp bertscore = nlp.load_metric('bertscore') # load hyps and refs ... print (bertscore.compute(hyps, refs, lang='en')) ``` I got the following error. ``` Traceback (most rece...
willywsm1013
https://github.com/huggingface/datasets/issues/375
null
false
654,895,066
374
Add dataset post processing for faiss indexes
closed
[]
2020-07-10T16:25:59
2020-07-13T13:44:03
2020-07-13T13:44:01
# Post processing of datasets for faiss indexes Now that we can have datasets with embeddings (see `wiki_pr` for example), we can allow users to load the dataset + get the Faiss index that comes with it to do nearest neighbors queries. ## Implementation proposition - Faiss indexes have to be added to the `nlp....
lhoestq
https://github.com/huggingface/datasets/pull/374
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/374", "html_url": "https://github.com/huggingface/datasets/pull/374", "diff_url": "https://github.com/huggingface/datasets/pull/374.diff", "patch_url": "https://github.com/huggingface/datasets/pull/374.patch", "merged_at": "2020-07-13T13:44:01"...
true
654,845,133
373
Segmentation fault when loading local JSON dataset as of #372
closed
[]
2020-07-10T15:04:25
2022-10-04T18:05:47
2022-10-04T18:05:47
The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault. ``` dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, f...
vegarab
https://github.com/huggingface/datasets/issues/373
null
false
654,774,420
372
Make the json script more flexible
closed
[]
2020-07-10T13:15:15
2020-07-10T14:52:07
2020-07-10T14:52:06
Fix https://github.com/huggingface/nlp/issues/359 Fix https://github.com/huggingface/nlp/issues/369 JSON script now can accept JSON files containing a single dict with the records as a list in one attribute to the dict (previously it only accepted JSON files containing records as rows of dicts in the file). In t...
thomwolf
https://github.com/huggingface/datasets/pull/372
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/372", "html_url": "https://github.com/huggingface/datasets/pull/372", "diff_url": "https://github.com/huggingface/datasets/pull/372.diff", "patch_url": "https://github.com/huggingface/datasets/pull/372.patch", "merged_at": "2020-07-10T14:52:05"...
true
654,668,242
371
Fix cached file path for metrics with different config names
closed
[]
2020-07-10T10:02:24
2020-07-10T13:45:22
2020-07-10T13:45:20
The config name was not taken into account to build the cached file path. It should fix #368
lhoestq
https://github.com/huggingface/datasets/pull/371
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/371", "html_url": "https://github.com/huggingface/datasets/pull/371", "diff_url": "https://github.com/huggingface/datasets/pull/371.diff", "patch_url": "https://github.com/huggingface/datasets/pull/371.patch", "merged_at": "2020-07-10T13:45:20"...
true
654,304,193
370
Allow indexing Dataset via np.ndarray
closed
[]
2020-07-09T19:43:15
2020-07-10T14:05:44
2020-07-10T14:05:43
jarednielsen
https://github.com/huggingface/datasets/pull/370
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/370", "html_url": "https://github.com/huggingface/datasets/pull/370", "diff_url": "https://github.com/huggingface/datasets/pull/370.diff", "patch_url": "https://github.com/huggingface/datasets/pull/370.patch", "merged_at": "2020-07-10T14:05:43"...
true
654,186,890
369
can't load local dataset: pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries
closed
[]
2020-07-09T16:16:53
2020-12-15T23:07:22
2020-07-10T14:52:06
Trying to load a local SQuAD-formatted dataset (from a JSON file, about 60MB): ``` dataset = nlp.load_dataset(path='json', data_files={nlp.Split.TRAIN: ["./path/to/file.json"]}) ``` causes ``` Traceback (most recent call last): File "dataloader.py", line 9, in <module> ["./path/to/file.json"]}) File "/...
vegarab
https://github.com/huggingface/datasets/issues/369
null
false
654,087,251
368
load_metric can't acquire lock anymore
closed
[]
2020-07-09T14:04:09
2020-07-10T13:45:20
2020-07-10T13:45:20
I can't load metric (glue) anymore after an error in a previous run. I even removed the whole cache folder `/home/XXX/.cache/huggingface/`, and the issue persisted. What are the steps to fix this? Traceback (most recent call last): File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/n...
ydshieh
https://github.com/huggingface/datasets/issues/368
null
false
654,012,984
367
Update Xtreme to add PAWS-X es
closed
[]
2020-07-09T12:14:37
2020-07-09T12:37:11
2020-07-09T12:37:10
This PR adds the `PAWS-X.es` in the Xtreme dataset #362
mariamabarham
https://github.com/huggingface/datasets/pull/367
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/367", "html_url": "https://github.com/huggingface/datasets/pull/367", "diff_url": "https://github.com/huggingface/datasets/pull/367.diff", "patch_url": "https://github.com/huggingface/datasets/pull/367.patch", "merged_at": "2020-07-09T12:37:10"...
true
653,954,896
366
Add quora dataset
closed
[]
2020-07-09T10:34:22
2020-07-13T17:35:21
2020-07-13T17:35:21
Added the [Quora question pairs dataset](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs). Implementation Notes: - I used the original version provided on the quora website. There's also a [Kaggle competition](https://www.kaggle.com/c/quora-question-pairs) which has a nice train/test sp...
ghomasHudson
https://github.com/huggingface/datasets/pull/366
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/366", "html_url": "https://github.com/huggingface/datasets/pull/366", "diff_url": "https://github.com/huggingface/datasets/pull/366.diff", "patch_url": "https://github.com/huggingface/datasets/pull/366.patch", "merged_at": "2020-07-13T17:35:21"...
true
653,845,964
365
How to augment data ?
closed
[]
2020-07-09T07:52:37
2020-07-10T09:12:07
2020-07-10T08:22:15
Is there any clean way to augment data? For now my work-around is to use batched map, like this: ```python def aug(samples): # Simply copy the existing data to have x2 amount of data for k, v in samples.items(): samples[k].extend(v) return samples dataset = dataset.map(aug, batched=T...
astariul
https://github.com/huggingface/datasets/issues/365
null
false
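The batched-map workaround above relies on `map`'s contract that a batch is a dict of equal-length column lists, and that returning longer lists produces more examples. A minimal pure-Python sketch of that contract (no `nlp` dependency; names illustrative):

```python
def aug(batch):
    # Duplicate every example in the batch: each column list doubles
    # in length, so the mapped dataset ends up with 2x the rows.
    return {k: v + v for k, v in batch.items()}

batch = {"text": ["a", "b"], "label": [0, 1]}
augmented = aug(batch)
# augmented == {"text": ["a", "b", "a", "b"], "label": [0, 1, 0, 1]}
```

With `nlp` itself this is applied as `dataset.map(aug, batched=True)`.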
653,821,597
364
add MS MARCO dataset
closed
[]
2020-07-09T07:11:19
2020-08-06T06:15:49
2020-08-06T06:15:48
This PR adds the MS MARCO dataset as requested in issue #336. MS MARCO has multiple tasks, including: - Passage and Document Retrieval - Keyphrase Extraction - QA and NLG This PR only adds the 2 versions of the QA and NLG task dataset, which were released with the original paper here https://arxiv.org/pd...
mariamabarham
https://github.com/huggingface/datasets/pull/364
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/364", "html_url": "https://github.com/huggingface/datasets/pull/364", "diff_url": "https://github.com/huggingface/datasets/pull/364.diff", "patch_url": "https://github.com/huggingface/datasets/pull/364.patch", "merged_at": "2020-08-06T06:15:48"...
true
653,821,172
363
Adding support for generic multi-dimensional tensors and auxiliary image data for multimodal datasets
closed
[]
2020-07-09T07:10:30
2020-08-24T09:59:35
2020-08-24T09:59:35
nlp/features.py: The main factory class is MultiArray, every single time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples on working with this in datas...
eltoto1219
https://github.com/huggingface/datasets/pull/363
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/363", "html_url": "https://github.com/huggingface/datasets/pull/363", "diff_url": "https://github.com/huggingface/datasets/pull/363.diff", "patch_url": "https://github.com/huggingface/datasets/pull/363.patch", "merged_at": "2020-08-24T09:59:35"...
true
653,766,245
362
[dataset subset missing] xtreme paws-x
closed
[]
2020-07-09T05:04:54
2020-07-09T12:38:42
2020-07-09T12:38:42
I tried nlp.load_dataset('xtreme', 'PAWS-X.es') but got a ValueError. It turns out that the subset for Spanish is missing: https://github.com/google-research-datasets/paws/tree/master/pawsx
cosmeowpawlitan
https://github.com/huggingface/datasets/issues/362
null
false
653,757,376
361
🐛 [Metrics] ROUGE is non-deterministic
closed
[]
2020-07-09T04:39:37
2022-09-09T15:20:55
2020-07-20T23:48:37
If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different. Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem. Example of F-score for ROUGE-1, ROUGE-2, ROUGE-L in 2 differe...
astariul
https://github.com/huggingface/datasets/issues/361
null
false
653,687,176
360
[Feature request] Add dataset.ragged_map() function for many-to-many transformations
closed
[]
2020-07-09T01:04:43
2020-07-09T19:31:51
2020-07-09T19:31:51
`dataset.map()` enables one-to-one transformations. Input one example and output one example. This is helpful for tokenizing and cleaning individual lines. `dataset.filter()` enables one-to-(one-or-none) transformations. Input one example and output either zero/one example. This is helpful for removing portions from t...
jarednielsen
https://github.com/huggingface/datasets/issues/360
null
false
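The many-to-many transformation requested here can be pictured through the same batched dict-of-columns contract, since the output batch need not have the same length as the input. An illustrative, dependency-free sketch:

```python
def split_into_sentences(batch):
    # One input document can yield several output rows:
    # the returned column is longer than the input column.
    out = {"sentence": []}
    for doc in batch["document"]:
        out["sentence"].extend(s.strip() for s in doc.split(".") if s.strip())
    return out

batch = {"document": ["First. Second.", "Only one."]}
result = split_into_sentences(batch)
# result["sentence"] == ["First", "Second", "Only one"]
```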
653,656,279
359
ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures
closed
[]
2020-07-08T23:24:05
2020-07-10T14:52:06
2020-07-10T14:52:06
I tried using the Json dataloader to load some JSON lines files. but get an exception in the parse_schema function. ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-23-9aecfbee53bd> in <mo...
timothyjlaurent
https://github.com/huggingface/datasets/issues/359
null
false
653,645,121
358
Starting to add some real doc
closed
[]
2020-07-08T22:53:03
2020-07-14T09:58:17
2020-07-14T09:58:15
Adding a lot of documentation for: - load a dataset - explore the dataset object - process data with the dataset - add a new dataset script - share a dataset script - full package reference This version of the doc can be explored here: https://2219-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.htm...
thomwolf
https://github.com/huggingface/datasets/pull/358
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/358", "html_url": "https://github.com/huggingface/datasets/pull/358", "diff_url": "https://github.com/huggingface/datasets/pull/358.diff", "patch_url": "https://github.com/huggingface/datasets/pull/358.patch", "merged_at": "2020-07-14T09:58:15"...
true
653,642,292
357
Add hashes to cnn_dailymail
closed
[]
2020-07-08T22:45:21
2020-07-13T14:16:38
2020-07-13T14:16:38
The URL hashes are helpful for comparing results from other sources.
jbragg
https://github.com/huggingface/datasets/pull/357
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/357", "html_url": "https://github.com/huggingface/datasets/pull/357", "diff_url": "https://github.com/huggingface/datasets/pull/357.diff", "patch_url": "https://github.com/huggingface/datasets/pull/357.patch", "merged_at": "2020-07-13T14:16:38"...
true
653,537,388
356
Add text dataset
closed
[]
2020-07-08T19:21:53
2020-07-10T14:19:03
2020-07-10T14:19:03
Usage: ```python from nlp import load_dataset dset = load_dataset("text", data_files="/path/to/file.txt")["train"] ``` I created a dummy_data.zip which contains three files: `train.txt`, `test.txt`, `dev.txt`. Each of these contains two lines. It passes ```bash RUN_SLOW=1 pytest tests/test_dataset_common...
jarednielsen
https://github.com/huggingface/datasets/pull/356
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/356", "html_url": "https://github.com/huggingface/datasets/pull/356", "diff_url": "https://github.com/huggingface/datasets/pull/356.diff", "patch_url": "https://github.com/huggingface/datasets/pull/356.patch", "merged_at": "2020-07-10T14:19:03"...
true
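Conceptually, the text loader reduces to reading a file line by line into a single `text` column. A dependency-free sketch of that behavior (illustrative, not the loader's actual implementation):

```python
import os
import tempfile

def read_text_dataset(path):
    # One example per line, stripped of the trailing newline.
    with open(path, encoding="utf-8") as f:
        return {"text": [line.rstrip("\n") for line in f]}

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("hello world\nsecond line\n")
dset = read_text_dataset(f.name)
os.unlink(f.name)
# dset == {"text": ["hello world", "second line"]}
```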
653,451,013
355
can't load SNLI dataset
closed
[]
2020-07-08T16:54:14
2020-07-18T05:15:57
2020-07-15T07:59:01
`nlp` seems to load `snli` from some URL based on nlp.stanford.edu. This subdomain is frequently down -- including right now, when I'd like to load `snli` in a Colab notebook, but can't. Is there a plan to move these datasets to huggingface servers for a more stable solution? Btw, here's the stack trace: ``` ...
jxmorris12
https://github.com/huggingface/datasets/issues/355
null
false
653,357,617
354
More faiss control
closed
[]
2020-07-08T14:45:20
2020-07-09T09:54:54
2020-07-09T09:54:51
Allow users to specify a faiss index they created themselves, as sometimes indexes can be composite, for example
lhoestq
https://github.com/huggingface/datasets/pull/354
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/354", "html_url": "https://github.com/huggingface/datasets/pull/354", "diff_url": "https://github.com/huggingface/datasets/pull/354.diff", "patch_url": "https://github.com/huggingface/datasets/pull/354.patch", "merged_at": "2020-07-09T09:54:51"...
true
653,250,611
353
[Dataset requests] New datasets for Text Classification
open
[]
2020-07-08T12:17:58
2025-04-05T09:28:15
null
We are missing a few datasets for Text Classification which is an important field. Namely, it would be really nice to add: - [x] TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.trec_dataset) **[done]** - #386 - [x] Yelp-5 - #...
thomwolf
https://github.com/huggingface/datasets/issues/353
null
false
653,128,883
352
🐛[BugFix]fix seqeval
closed
[]
2020-07-08T09:12:12
2020-07-16T08:26:46
2020-07-16T08:26:46
Fix seqeval processing of labels such as 'B', 'B-ARGM-LOC'
AlongWY
https://github.com/huggingface/datasets/pull/352
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/352", "html_url": "https://github.com/huggingface/datasets/pull/352", "diff_url": "https://github.com/huggingface/datasets/pull/352.diff", "patch_url": "https://github.com/huggingface/datasets/pull/352.patch", "merged_at": "2020-07-16T08:26:46"...
true
652,424,048
351
add pandas dataset
closed
[]
2020-07-07T15:38:07
2020-07-08T14:15:16
2020-07-08T14:15:15
Create a dataset from serialized pandas dataframes. Usage: ```python from nlp import load_dataset dset = load_dataset("pandas", data_files="df.pkl")["train"] ```
lhoestq
https://github.com/huggingface/datasets/pull/351
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/351", "html_url": "https://github.com/huggingface/datasets/pull/351", "diff_url": "https://github.com/huggingface/datasets/pull/351.diff", "patch_url": "https://github.com/huggingface/datasets/pull/351.patch", "merged_at": "2020-07-08T14:15:15"...
true
652,398,691
350
add from_pandas and from_dict
closed
[]
2020-07-07T15:03:53
2020-07-08T14:14:33
2020-07-08T14:14:32
I added two new methods to the `Dataset` class: - `from_pandas()` to create a dataset from a pandas dataframe - `from_dict()` to create a dataset from a dictionary (keys = columns) It uses the `pa.Table.from_pandas` and `pa.Table.from_pydict` functions to do so. It is also possible to specify the feature types v...
lhoestq
https://github.com/huggingface/datasets/pull/350
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/350", "html_url": "https://github.com/huggingface/datasets/pull/350", "diff_url": "https://github.com/huggingface/datasets/pull/350.diff", "patch_url": "https://github.com/huggingface/datasets/pull/350.patch", "merged_at": "2020-07-08T14:14:32"...
true
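The columnar layout these constructors target can be illustrated without pyarrow: a dataset is a dict of equal-length column lists, and a row is reassembled on access. A sketch under that assumption (names illustrative, not the `Dataset` internals):

```python
def from_dict(columns):
    # Enforce the columnar invariant: every column has the same length.
    lengths = {len(v) for v in columns.values()}
    assert len(lengths) <= 1, "all columns must have the same length"
    return columns

def row(columns, i):
    # A "row" is reassembled on the fly from the column lists.
    return {k: v[i] for k, v in columns.items()}

dset = from_dict({"text": ["a", "b"], "label": [0, 1]})
first = row(dset, 0)
# first == {"text": "a", "label": 0}
```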
652,231,571
349
Hyperpartisan news detection
closed
[]
2020-07-07T11:06:37
2020-07-07T20:47:27
2020-07-07T14:57:11
Adding the hyperpartisan news detection dataset from PAN. This contains news article text, labelled with whether they're hyper-partisan and what kinds of biases they display. Implementation notes: - As with many PAN tasks, the data is hosted on [Zenodo](https://zenodo.org/record/1489920) and must be requested before...
ghomasHudson
https://github.com/huggingface/datasets/pull/349
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/349", "html_url": "https://github.com/huggingface/datasets/pull/349", "diff_url": "https://github.com/huggingface/datasets/pull/349.diff", "patch_url": "https://github.com/huggingface/datasets/pull/349.patch", "merged_at": "2020-07-07T14:57:11"...
true
652,158,308
348
Add OSCAR dataset
closed
[]
2020-07-07T09:22:07
2021-05-03T22:07:08
2021-02-09T10:19:19
I don't know if the tests pass; when I run them, they try to download the whole corpus, which is around 3.5TB compressed, and I don't have that kind of space. I'll really need some help with it 😅 Thanks!
pjox
https://github.com/huggingface/datasets/pull/348
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/348", "html_url": "https://github.com/huggingface/datasets/pull/348", "diff_url": "https://github.com/huggingface/datasets/pull/348.diff", "patch_url": "https://github.com/huggingface/datasets/pull/348.patch", "merged_at": null }
true
652,106,567
347
'cp950' codec error from load_dataset('xtreme', 'tydiqa')
closed
[]
2020-07-07T08:14:23
2020-09-07T14:51:45
2020-09-07T14:51:45
![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, perhaps : https://www.python.org/dev/peps/pep-0263/ I gues...
cosmeowpawlitan
https://github.com/huggingface/datasets/issues/347
null
false
652,044,151
346
Add emotion dataset
closed
[]
2020-07-07T06:35:41
2022-05-30T15:16:44
2020-07-13T14:39:38
Hello 🤗 team! I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/me...
lewtun
https://github.com/huggingface/datasets/pull/346
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/346", "html_url": "https://github.com/huggingface/datasets/pull/346", "diff_url": "https://github.com/huggingface/datasets/pull/346.diff", "patch_url": "https://github.com/huggingface/datasets/pull/346.patch", "merged_at": "2020-07-13T14:39:38"...
true
651,761,201
345
Supporting documents in ELI5
closed
[]
2020-07-06T19:14:13
2020-10-27T15:38:45
2020-10-27T15:38:45
I was attempting to use the ELI5 dataset, when I realized that huggingface does not provide the supporting documents (the source documents from the common crawl). Without the supporting documents, this makes the dataset about as useful for my project as a block of cheese, or some other more apt metaphor. According to ...
saverymax
https://github.com/huggingface/datasets/issues/345
null
false
651,495,246
344
Search qa
closed
[]
2020-07-06T12:23:16
2020-07-16T08:58:16
2020-07-16T08:58:16
This PR adds the Search QA dataset used in **SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine**. The dataset has the following config names: - raw_jeopardy: raw data - train_test_val: the split version #336
mariamabarham
https://github.com/huggingface/datasets/pull/344
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/344", "html_url": "https://github.com/huggingface/datasets/pull/344", "diff_url": "https://github.com/huggingface/datasets/pull/344.diff", "patch_url": "https://github.com/huggingface/datasets/pull/344.patch", "merged_at": null }
true
651,419,630
343
Fix nested tensorflow format
closed
[]
2020-07-06T10:13:45
2020-07-06T13:11:52
2020-07-06T13:11:51
In #339 and #337 we are thinking about adding a way to export datasets to tfrecords. However I noticed that it was not possible to do `dset.set_format("tensorflow")` on datasets with nested features like `squad`. I fixed that using a nested map operations to convert features to `tf.ragged.constant`. I also added ...
lhoestq
https://github.com/huggingface/datasets/pull/343
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/343", "html_url": "https://github.com/huggingface/datasets/pull/343", "diff_url": "https://github.com/huggingface/datasets/pull/343.diff", "patch_url": "https://github.com/huggingface/datasets/pull/343.patch", "merged_at": "2020-07-06T13:11:51"...
true
651,333,194
342
Features should be updated when `map()` changes schema
closed
[]
2020-07-06T08:03:23
2020-07-23T10:15:16
2020-07-23T10:15:16
`dataset.map()` can change the schema and column names. We should update the features in this case (with what is possible to infer).
thomwolf
https://github.com/huggingface/datasets/issues/342
null
false
650,611,969
341
add fever dataset
closed
[]
2020-07-03T13:53:07
2020-07-06T13:03:48
2020-07-06T13:03:47
This PR add the FEVER dataset https://fever.ai/ used in with the paper: FEVER: a large-scale dataset for Fact Extraction and VERification (https://arxiv.org/pdf/1803.05355.pdf). #336
mariamabarham
https://github.com/huggingface/datasets/pull/341
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/341", "html_url": "https://github.com/huggingface/datasets/pull/341", "diff_url": "https://github.com/huggingface/datasets/pull/341.diff", "patch_url": "https://github.com/huggingface/datasets/pull/341.patch", "merged_at": "2020-07-06T13:03:47"...
true
650,533,920
340
Update cfq.py
closed
[]
2020-07-03T11:23:19
2020-07-03T12:33:50
2020-07-03T12:33:50
Make the dataset name consistent with in the paper: Compositional Freebase Question => Compositional Freebase Questions.
brainshawn
https://github.com/huggingface/datasets/pull/340
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/340", "html_url": "https://github.com/huggingface/datasets/pull/340", "diff_url": "https://github.com/huggingface/datasets/pull/340.diff", "patch_url": "https://github.com/huggingface/datasets/pull/340.patch", "merged_at": "2020-07-03T12:33:50"...
true
650,156,468
339
Add dataset.export() to TFRecords
closed
[]
2020-07-02T19:26:27
2020-07-22T09:16:12
2020-07-22T09:16:12
Fixes https://github.com/huggingface/nlp/issues/337 Some design decisions: - Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitt...
jarednielsen
https://github.com/huggingface/datasets/pull/339
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/339", "html_url": "https://github.com/huggingface/datasets/pull/339", "diff_url": "https://github.com/huggingface/datasets/pull/339.diff", "patch_url": "https://github.com/huggingface/datasets/pull/339.patch", "merged_at": "2020-07-22T09:16:11"...
true
650,057,253
338
Run `make style`
closed
[]
2020-07-02T16:19:47
2020-07-02T18:03:10
2020-07-02T18:03:10
These files get changed when I run `make style` on an unrelated PR. Upstreaming these changes so development on a different branch can be easier.
jarednielsen
https://github.com/huggingface/datasets/pull/338
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/338", "html_url": "https://github.com/huggingface/datasets/pull/338", "diff_url": "https://github.com/huggingface/datasets/pull/338.diff", "patch_url": "https://github.com/huggingface/datasets/pull/338.patch", "merged_at": "2020-07-02T18:03:10"...
true
650,035,887
337
[Feature request] Export Arrow dataset to TFRecords
closed
[]
2020-07-02T15:47:12
2020-07-22T09:16:12
2020-07-22T09:16:12
The TFRecord generation process is error-prone and requires complex separate Python scripts to download and preprocess the data. I propose to combine the user-friendly features of `nlp` with the speed and efficiency of TFRecords. Sample API: ```python # use these existing methods ds = load_dataset("wikitext", "wik...
jarednielsen
https://github.com/huggingface/datasets/issues/337
null
false
649,914,203
336
[Dataset requests] New datasets for Open Question Answering
closed
[]
2020-07-02T13:03:03
2020-07-16T09:04:22
2020-07-16T09:04:22
We are still missing a few datasets for Open-Question Answering, which is currently a field in strong development. Namely, it would be really nice to add: - WebQuestions (Berant et al., 2013) [done] - CuratedTrec (Baudis et al. 2015) [not open-source] - MS-MARCO (Nguyen et al. 2016) [done] - SearchQA (Dunn et al....
thomwolf
https://github.com/huggingface/datasets/issues/336
null
false
649,765,179
335
BioMRC Dataset presented in BioNLP 2020 ACL Workshop
closed
[]
2020-07-02T09:03:41
2020-07-15T08:02:07
2020-07-15T08:02:07
PetrosStav
https://github.com/huggingface/datasets/pull/335
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/335", "html_url": "https://github.com/huggingface/datasets/pull/335", "diff_url": "https://github.com/huggingface/datasets/pull/335.diff", "patch_url": "https://github.com/huggingface/datasets/pull/335.patch", "merged_at": "2020-07-15T08:02:07"...
true
649,661,791
334
Add dataset.shard() method
closed
[]
2020-07-02T06:05:19
2020-07-06T12:35:36
2020-07-06T12:35:36
Fixes https://github.com/huggingface/nlp/issues/312
jarednielsen
https://github.com/huggingface/datasets/pull/334
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/334", "html_url": "https://github.com/huggingface/datasets/pull/334", "diff_url": "https://github.com/huggingface/datasets/pull/334.diff", "patch_url": "https://github.com/huggingface/datasets/pull/334.patch", "merged_at": "2020-07-06T12:35:36"...
true
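The sharding added here can be pictured as plain index arithmetic: shard `index` of `num_shards` keeps every row whose position is congruent to `index` modulo `num_shards`. An illustrative sketch (not necessarily the library's exact strategy):

```python
def shard_indices(num_rows, num_shards, index):
    # Round-robin sharding: row i goes to shard i % num_shards.
    return [i for i in range(num_rows) if i % num_shards == index]

# 10 rows split into 3 shards
shards = [shard_indices(10, 3, i) for i in range(3)]
# shards == [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
```

The shards are disjoint and together cover every row, which is what makes them safe for distributed processing.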
649,236,516
333
fix variable name typo
closed
[]
2020-07-01T19:13:50
2020-07-24T15:43:31
2020-07-24T08:32:16
stas00
https://github.com/huggingface/datasets/pull/333
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/333", "html_url": "https://github.com/huggingface/datasets/pull/333", "diff_url": "https://github.com/huggingface/datasets/pull/333.diff", "patch_url": "https://github.com/huggingface/datasets/pull/333.patch", "merged_at": null }
true
649,140,135
332
Add wiki_dpr
closed
[]
2020-07-01T17:12:00
2020-07-06T12:21:17
2020-07-06T12:21:16
Presented in the [Dense Passage Retrieval paper](https://arxiv.org/pdf/2004.04906.pdf), this dataset consists in 21M passages from the english wikipedia along with their 768-dim embeddings computed using DPR's context encoder. Note on the implementation: - There are two configs: with and without the embeddings (73G...
lhoestq
https://github.com/huggingface/datasets/pull/332
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/332", "html_url": "https://github.com/huggingface/datasets/pull/332", "diff_url": "https://github.com/huggingface/datasets/pull/332.diff", "patch_url": "https://github.com/huggingface/datasets/pull/332.patch", "merged_at": "2020-07-06T12:21:16"...
true
648,533,199
331
Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError`
closed
[]
2020-06-30T22:21:33
2020-07-09T13:03:40
2020-07-09T13:03:40
``` >>> import nlp >>> nlp.load_dataset('cnn_dailymail', '3.0.0') Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0... Traceback (most recent call last): File "<stdin>", line 1, in...
jxmorris12
https://github.com/huggingface/datasets/issues/331
null
false
648,525,720
330
Doc red
closed
[]
2020-06-30T22:05:31
2020-07-06T12:10:39
2020-07-05T12:27:29
Adding [DocRED](https://github.com/thunlp/DocRED) - a relation extraction dataset which tests document-level RE. A few implementation notes: - There are 2 separate versions of the training set - *annotated* and *distant*. Instead of `nlp.Split.Train` I've used the splits `"train_annotated"` and `"train_distant"` to ...
ghomasHudson
https://github.com/huggingface/datasets/pull/330
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/330", "html_url": "https://github.com/huggingface/datasets/pull/330", "diff_url": "https://github.com/huggingface/datasets/pull/330.diff", "patch_url": "https://github.com/huggingface/datasets/pull/330.patch", "merged_at": "2020-07-05T12:27:29"...
true
648,446,979
329
[Bug] FileLock dependency incompatible with filesystem
closed
[]
2020-06-30T19:45:31
2024-12-26T15:13:39
2020-06-30T21:33:06
I'm downloading a dataset successfully with `load_dataset("wikitext", "wikitext-2-raw-v1")` But when I attempt to cache it on an external volume, it hangs indefinitely: `load_dataset("wikitext", "wikitext-2-raw-v1", cache_dir="/fsx") # /fsx is an external volume mount` The filesystem when hanging looks like thi...
jarednielsen
https://github.com/huggingface/datasets/issues/329
null
false
648,326,841
328
Fork dataset
closed
[]
2020-06-30T16:42:53
2020-07-06T21:43:59
2020-07-06T21:43:59
We have a multi-task learning model training setup that I'm trying to convert to the Arrow-based nlp dataset. We're currently training a custom TensorFlow model but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers. Our preprocessing flow parses raw text and...
timothyjlaurent
https://github.com/huggingface/datasets/issues/328
null
false
648,312,858
327
set seed for shuffling tests
closed
[]
2020-06-30T16:21:34
2020-07-02T08:34:05
2020-07-02T08:34:04
Some tests were randomly failing because of a missing seed in a test for `train_test_split(shuffle=True)`
lhoestq
https://github.com/huggingface/datasets/pull/327
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/327", "html_url": "https://github.com/huggingface/datasets/pull/327", "diff_url": "https://github.com/huggingface/datasets/pull/327.diff", "patch_url": "https://github.com/huggingface/datasets/pull/327.patch", "merged_at": "2020-07-02T08:34:04"...
true
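The flakiness fixed here comes from asserting on the contents of an unseeded shuffle. A minimal sketch of a deterministic shuffled split, where fixing the seed makes the result reproducible across runs (names illustrative):

```python
import random

def train_test_split(rows, test_size, seed):
    # Seeding the RNG makes the shuffled split reproducible.
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    return shuffled[test_size:], shuffled[:test_size]

train_a, test_a = train_test_split(list(range(10)), 3, seed=42)
train_b, test_b = train_test_split(list(range(10)), 3, seed=42)
# Same seed -> identical splits, so the test can assert exact contents.
```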
648,126,103
326
Large dataset in Squad2-format
closed
[]
2020-06-30T12:18:59
2020-07-09T09:01:50
2020-07-09T09:01:50
At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community. Because of the computing power required, we split it into multiple tiles, but they are all in the same format. Right now the most important facts about it are these: - Contexts: 1.047.671 - questions: 1.677...
flozi00
https://github.com/huggingface/datasets/issues/326
null
false
647,601,592
325
Add SQuADShifts dataset
closed
[]
2020-06-29T19:11:16
2020-06-30T17:07:31
2020-06-30T17:07:31
This PR adds the four new variants of the SQuAD dataset used in [The Effect of Natural Distribution Shift on Question Answering Models](https://arxiv.org/abs/2004.14444) to facilitate evaluating model robustness to distribution shift.
millerjohnp
https://github.com/huggingface/datasets/pull/325
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/325", "html_url": "https://github.com/huggingface/datasets/pull/325", "diff_url": "https://github.com/huggingface/datasets/pull/325.diff", "patch_url": "https://github.com/huggingface/datasets/pull/325.patch", "merged_at": "2020-06-30T17:07:31"...
true
647,525,725
324
Error when calculating glue score
closed
[]
2020-06-29T16:53:48
2020-07-09T09:13:34
2020-07-09T09:13:34
I was trying the glue score along with other metrics here, but glue gives me this error: ``` import nlp glue_metric = nlp.load_metric('glue',name="cola") glue_score = glue_metric.compute(predictions, references) ``` ``` --------------------------------------------------------------------------- --------------...
D-i-l-r-u-k-s-h-i
https://github.com/huggingface/datasets/issues/324
null
false
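For CoLA, the GLUE score is the Matthews correlation coefficient. A pure-Python sketch of that metric (not the `nlp.load_metric` implementation), with `predictions` and `references` passed explicitly as keyword arguments to avoid argument-order mistakes:

```python
import math

def matthews_corr(predictions, references):
    # Matthews correlation coefficient, the metric behind GLUE's CoLA task.
    tp = sum(p == r == 1 for p, r in zip(predictions, references))
    tn = sum(p == r == 0 for p, r in zip(predictions, references))
    fp = sum(p == 1 and r == 0 for p, r in zip(predictions, references))
    fn = sum(p == 0 and r == 1 for p, r in zip(predictions, references))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

score = matthews_corr(predictions=[1, 0, 1, 1], references=[1, 0, 0, 1])
# score == 2 / sqrt(12), about 0.577
```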
647,521,308
323
Add package path to sys when downloading package as github archive
closed
[]
2020-06-29T16:46:01
2020-07-30T14:00:23
2020-07-30T14:00:23
This fixes the `coval.py` metric so that imports within the downloaded module work correctly. We can use a similar trick to add the BLEURT metric (@ankparikh) @thomwolf not sure how you feel about adding to the `PYTHONPATH` from the script. This is the only way I could make it work with my understanding of `importli...
yjernite
https://github.com/huggingface/datasets/pull/323
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/323", "html_url": "https://github.com/huggingface/datasets/pull/323", "diff_url": "https://github.com/huggingface/datasets/pull/323.diff", "patch_url": "https://github.com/huggingface/datasets/pull/323.patch", "merged_at": null }
true
647,483,850
322
output nested dict in get_nearest_examples
closed
[]
2020-06-29T15:47:47
2020-07-02T08:33:33
2020-07-02T08:33:32
As we are using a columnar format like arrow as the backend for datasets, we expect to have a dictionary of columns when we slice a dataset like in this example: ```python my_examples = dataset[0:10] print(type(my_examples)) # >>> dict print(my_examples["my_column"][0] # >>> this is the first element of the colum...
lhoestq
https://github.com/huggingface/datasets/pull/322
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/322", "html_url": "https://github.com/huggingface/datasets/pull/322", "diff_url": "https://github.com/huggingface/datasets/pull/322.diff", "patch_url": "https://github.com/huggingface/datasets/pull/322.patch", "merged_at": "2020-07-02T08:33:32"...
true
647,271,526
321
ERROR:root:mwparserfromhell
closed
[]
2020-06-29T11:10:43
2022-02-14T15:21:46
2022-02-14T15:21:46
Hi, I am trying to download some wikipedia data but I got this error for Spanish "es" (other languages may have the same error; I haven't tried all of them). `ERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token sta...
Shiro-LK
https://github.com/huggingface/datasets/issues/321
null
false
647,188,167
320
Blog Authorship Corpus, Non Matching Splits Sizes Error, nlp viewer
closed
[]
2020-06-29T07:36:35
2020-06-29T14:44:42
2020-06-29T14:44:42
Selecting `blog_authorship_corpus` in the nlp viewer throws the following error: ``` NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dat...
mariamabarham
https://github.com/huggingface/datasets/issues/320
null
false