Dataset schema (column: dtype, observed min/max per the viewer):

id: int64 (599M to 3.48B)
number: int64 (1 to 7.8k)
title: string (length 1 to 290)
state: string (2 values)
comments: list (length 0 to 30)
created_at: timestamp[s] (2020-04-14 10:18:02 to 2025-10-05 06:37:50)
updated_at: timestamp[s] (2020-04-27 16:04:17 to 2025-10-05 10:32:43)
closed_at: timestamp[s] (2020-04-14 12:01:40 to 2025-10-01 13:56:03)
body: string (length 0 to 228k)
user: string (length 3 to 26)
html_url: string (length 46 to 51)
pull_request: dict
is_pull_request: bool (2 classes)
718,947,700
724
need to redirect /nlp to /datasets and remove outdated info
closed
[]
2020-10-11T23:12:12
2020-10-14T17:00:12
2020-10-14T17:00:12
It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all should probably redirect to: https://huggingface.co/datasets/wikihow also for some reason the new information is slightly borked. If you look at the old one it was nicely formatted and had t...
stas00
https://github.com/huggingface/datasets/issues/724
null
false
718,926,723
723
Adding pseudo-labels to datasets
closed
[]
2020-10-11T21:05:45
2021-08-03T05:11:51
2021-08-03T05:11:51
I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo. Since pseudo-labels are just a large model's generations on an existing dataset, what is ...
sshleifer
https://github.com/huggingface/datasets/issues/723
null
false
718,689,117
722
datasets(RWTH-PHOENIX-Weather 2014 T): add initial loading script
closed
[]
2020-10-10T19:44:08
2022-09-30T14:53:37
2022-09-30T14:53:37
This is the first sign language dataset in this repo as far as I know. Following an old issue I opened https://github.com/huggingface/datasets/issues/302. I added the dataset's official README file, but I see it's not very standard, so it can be removed.
AmitMY
https://github.com/huggingface/datasets/pull/722
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/722", "html_url": "https://github.com/huggingface/datasets/pull/722", "diff_url": "https://github.com/huggingface/datasets/pull/722.diff", "patch_url": "https://github.com/huggingface/datasets/pull/722.patch", "merged_at": null }
true
718,647,147
721
feat(dl_manager): add support for ftp downloads
closed
[]
2020-10-10T15:50:20
2022-02-15T10:44:44
2022-02-15T10:44:43
I am working on a new dataset (#302) and encounter a problem downloading it. ```python # This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/ _URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz" dl_manager.do...
AmitMY
https://github.com/huggingface/datasets/issues/721
null
false
716,581,266
720
OSError: Cannot find data file when not using the dummy dataset in RAG
closed
[]
2020-10-07T14:27:13
2020-12-23T14:04:31
2020-12-23T14:04:31
## Environment info transformers version: 3.3.1 Platform: Linux-4.19 Python version: 3.7.7 PyTorch version (GPU?): 1.6.0 Tensorflow version (GPU?): No Using GPU in script?: Yes Using distributed or parallel set-up in script?: No ## To reproduce Steps to reproduce the behaviour...
josemlopez
https://github.com/huggingface/datasets/issues/720
null
false
716,492,263
719
Fix train_test_split output format
closed
[]
2020-10-07T12:39:01
2020-10-07T13:38:08
2020-10-07T13:38:06
There was an issue in the `transmit_format` wrapper that returned bad formats when using train_test_split. This was due to `column_names` being handled as a List[str] instead of Dict[str, List[str]] when the dataset transform (train_test_split) returns a DatasetDict (one set of column names per split). This should ...
lhoestq
https://github.com/huggingface/datasets/pull/719
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/719", "html_url": "https://github.com/huggingface/datasets/pull/719", "diff_url": "https://github.com/huggingface/datasets/pull/719.diff", "patch_url": "https://github.com/huggingface/datasets/pull/719.patch", "merged_at": "2020-10-07T13:38:06"...
true
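The fix described above hinges on `column_names` having two possible shapes: a flat list for a single `Dataset`, but a dict of per-split lists once `train_test_split` returns a `DatasetDict`. A minimal stdlib-only sketch of handling both shapes (`normalize_column_names` is a hypothetical helper, not the actual `transmit_format` wrapper):

```python
# Minimal sketch: format-transmitting code must accept column_names either
# as a flat list (a single Dataset) or as a dict of per-split lists
# (a DatasetDict, one entry per split).
def normalize_column_names(column_names):
    if isinstance(column_names, dict):  # DatasetDict: one list per split
        return {split: list(cols) for split, cols in column_names.items()}
    return list(column_names)  # single Dataset: a flat list

print(normalize_column_names(["text", "label"]))
print(normalize_column_names({"train": ["text", "label"], "test": ["text", "label"]}))
```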
715,694,709
718
Don't use tqdm 4.50.0
closed
[]
2020-10-06T13:45:53
2020-10-06T13:49:24
2020-10-06T13:49:22
tqdm 4.50.0 introduced permission errors on Windows, see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/235/workflows/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369/jobs/1111) for the error details. For now I just added `<4.50.0` in the setup.py. Hopefully we can find what's wrong with this version soon
lhoestq
https://github.com/huggingface/datasets/pull/718
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/718", "html_url": "https://github.com/huggingface/datasets/pull/718", "diff_url": "https://github.com/huggingface/datasets/pull/718.diff", "patch_url": "https://github.com/huggingface/datasets/pull/718.patch", "merged_at": "2020-10-06T13:49:22"...
true
714,959,268
717
Fixes #712 Error in the Overview.ipynb notebook
closed
[]
2020-10-05T15:50:41
2020-10-06T06:31:43
2020-10-05T16:25:41
Fixes #712 Error in the Overview.ipynb notebook by adding `with_details=True` parameter to `list_datasets` function in Cell 3 of **overview** notebook
subhrm
https://github.com/huggingface/datasets/pull/717
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/717", "html_url": "https://github.com/huggingface/datasets/pull/717", "diff_url": "https://github.com/huggingface/datasets/pull/717.diff", "patch_url": "https://github.com/huggingface/datasets/pull/717.patch", "merged_at": "2020-10-05T16:25:40"...
true
714,952,888
716
Fixes #712 Attribute error in cell 3 of the overview notebook
closed
[]
2020-10-05T15:42:09
2020-10-05T15:46:38
2020-10-05T15:46:32
Fixes the Attribute error in cell 3 of the overview notebook
subhrm
https://github.com/huggingface/datasets/pull/716
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/716", "html_url": "https://github.com/huggingface/datasets/pull/716", "diff_url": "https://github.com/huggingface/datasets/pull/716.diff", "patch_url": "https://github.com/huggingface/datasets/pull/716.patch", "merged_at": null }
true
714,690,192
715
Use python read for text dataset
closed
[]
2020-10-05T09:47:55
2020-10-05T13:13:18
2020-10-05T13:13:17
As mentioned in #622 the pandas reader used for text dataset doesn't work properly when there are \r characters in the text file. Instead I switched to pure python using `open` and `read`. From my benchmark on a 100MB text file, it's the same speed as the previous pandas reader.
lhoestq
https://github.com/huggingface/datasets/pull/715
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/715", "html_url": "https://github.com/huggingface/datasets/pull/715", "diff_url": "https://github.com/huggingface/datasets/pull/715.diff", "patch_url": "https://github.com/huggingface/datasets/pull/715.patch", "merged_at": "2020-10-05T13:13:16"...
true
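The switch above works because Python's text mode opens files with universal newlines, so bare `\r`, `\r\n` and `\n` are all recognized as line breaks (the very case the pandas reader mishandled). A small self-contained illustration:

```python
import os
import tempfile

# Write raw bytes with all three newline conventions mixed together.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"line one\rline two\r\nline three\n")

# Text mode (newline=None) enables universal newlines: \r, \r\n and \n
# are all translated to \n on read.
with open(path, "r", encoding="utf-8") as f:
    lines = f.read().splitlines()
os.remove(path)

print(lines)  # ['line one', 'line two', 'line three']
```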
714,487,881
714
Add the official dependabot implementation
closed
[]
2020-10-05T03:49:45
2020-10-12T11:49:21
2020-10-12T11:49:21
This will keep dependencies up to date. It requires a `dependencies` PR label to be created in order to function correctly.
ALazyMeme
https://github.com/huggingface/datasets/pull/714
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/714", "html_url": "https://github.com/huggingface/datasets/pull/714", "diff_url": "https://github.com/huggingface/datasets/pull/714.diff", "patch_url": "https://github.com/huggingface/datasets/pull/714.patch", "merged_at": null }
true
714,475,732
713
Fix reading text files with carriage return symbols
closed
[]
2020-10-05T03:07:03
2020-10-09T05:58:25
2020-10-05T13:49:29
The new pandas-based text reader isn't able to work properly with files that contain carriage return symbols (`\r`). It fails with the following error message: ``` ... File "pandas/_libs/parsers.pyx", line 847, in pandas._libs.parsers.TextReader.read File "pandas/_libs/parsers.pyx", line 874, in pandas._l...
mozharovsky
https://github.com/huggingface/datasets/pull/713
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/713", "html_url": "https://github.com/huggingface/datasets/pull/713", "diff_url": "https://github.com/huggingface/datasets/pull/713.diff", "patch_url": "https://github.com/huggingface/datasets/pull/713.patch", "merged_at": null }
true
714,242,316
712
Error in the notebooks/Overview.ipynb notebook
closed
[]
2020-10-04T05:58:31
2020-10-05T16:25:40
2020-10-05T16:25:40
Hi, I got the following error in **cell number 3** while exploring the **Overview.ipynb** notebook in google colab. I used the [link ](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) provided in the main README file to open it in colab. ```python # You can acc...
subhrm
https://github.com/huggingface/datasets/issues/712
null
false
714,236,408
711
New Update bertscore.py
closed
[]
2020-10-04T05:13:09
2020-10-05T16:26:51
2020-10-05T16:26:51
DayasagarRSalian
https://github.com/huggingface/datasets/pull/711
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/711", "html_url": "https://github.com/huggingface/datasets/pull/711", "diff_url": "https://github.com/huggingface/datasets/pull/711.diff", "patch_url": "https://github.com/huggingface/datasets/pull/711.patch", "merged_at": "2020-10-05T16:26:51"...
true
714,186,999
710
fix README typos/ consistency
closed
[]
2020-10-03T22:20:56
2020-10-17T09:52:45
2020-10-17T09:52:45
discdiver
https://github.com/huggingface/datasets/pull/710
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/710", "html_url": "https://github.com/huggingface/datasets/pull/710", "diff_url": "https://github.com/huggingface/datasets/pull/710.diff", "patch_url": "https://github.com/huggingface/datasets/pull/710.patch", "merged_at": "2020-10-17T09:52:45"...
true
714,067,902
709
How to use similarity settings other then "BM25" in Elasticsearch index ?
closed
[]
2020-10-03T11:18:49
2022-10-04T17:19:37
2022-10-04T17:19:37
**QUESTION: How should we use similarity algorithms supported by Elasticsearch other than "BM25"?** **ES Reference** https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html **HF doc reference:** https://huggingface.co/docs/datasets/faiss_and_ea.html **context :** =...
nsankar
https://github.com/huggingface/datasets/issues/709
null
false
714,020,953
708
Datasets performance slow? - 6.4x slower than in memory dataset
closed
[]
2020-10-03T06:44:07
2021-02-12T14:13:28
2021-02-12T14:13:28
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower....
eugeneware
https://github.com/huggingface/datasets/issues/708
null
false
713,954,666
707
Requirements should specify pyarrow<1
closed
[]
2020-10-02T23:39:39
2020-12-04T08:22:39
2020-10-04T20:50:28
I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error, ``` module 'pyarrow' has no attribute 'PyExtensionType' ``` I traced it back to datasets having installed PyArrow 1.0.1 but there's not pinni...
mathcass
https://github.com/huggingface/datasets/issues/707
null
false
713,721,959
706
Fix config creation for data files with NamedSplit
closed
[]
2020-10-02T15:46:49
2020-10-05T08:15:00
2020-10-05T08:14:59
During config creation, we need to iterate through the data files of all the splits to compute a hash. To make sure the hash is unique given a certain combination of files/splits, we sort the split names. However the `NamedSplit` objects can't be passed to `sorted` and currently it raises an error: we need to sort th...
lhoestq
https://github.com/huggingface/datasets/pull/706
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/706", "html_url": "https://github.com/huggingface/datasets/pull/706", "diff_url": "https://github.com/huggingface/datasets/pull/706.diff", "patch_url": "https://github.com/huggingface/datasets/pull/706.patch", "merged_at": "2020-10-05T08:14:59"...
true
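The sorting fix above can be sketched with a stand-in class (an assumption: like the real `NamedSplit`, it defines `__str__` but no ordering operators). Passing such objects to `sorted` directly raises `TypeError`; sorting by their string form works:

```python
# FakeNamedSplit is a stand-in for datasets' NamedSplit, which has a string
# representation but does not implement '<'.
class FakeNamedSplit:
    def __init__(self, name):
        self._name = name

    def __str__(self):
        return self._name

splits = [FakeNamedSplit("train"), FakeNamedSplit("test"), FakeNamedSplit("validation")]
# sorted(splits) would raise:
#   TypeError: '<' not supported between instances of 'FakeNamedSplit' and 'FakeNamedSplit'
ordered = [str(s) for s in sorted(splits, key=str)]
print(ordered)  # ['test', 'train', 'validation']
```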
713,709,100
705
TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
closed
[]
2020-10-02T15:27:55
2020-10-05T08:14:59
2020-10-05T08:14:59
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 (installed from master) - `datasets` version: 1.0.2 (installed as a dependency from transformers) ...
pvcastro
https://github.com/huggingface/datasets/issues/705
null
false
713,572,556
704
Fix remote tests for new datasets
closed
[]
2020-10-02T12:08:04
2020-10-02T12:12:02
2020-10-02T12:12:01
When adding a new dataset, the remote tests fail because they try to get the new dataset from the master branch (i.e., where the dataset doesn't exist yet). To fix that I reverted to using the HF API that fetches the available datasets on S3, which is synced with the master branch
lhoestq
https://github.com/huggingface/datasets/pull/704
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/704", "html_url": "https://github.com/huggingface/datasets/pull/704", "diff_url": "https://github.com/huggingface/datasets/pull/704.diff", "patch_url": "https://github.com/huggingface/datasets/pull/704.patch", "merged_at": "2020-10-02T12:12:01"...
true
713,559,718
703
Add hotpot QA
closed
[]
2020-10-02T11:44:28
2020-10-02T12:54:41
2020-10-02T12:54:41
Added the [HotpotQA](https://github.com/hotpotqa/hotpot) multi-hop question answering dataset.
ghomasHudson
https://github.com/huggingface/datasets/pull/703
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/703", "html_url": "https://github.com/huggingface/datasets/pull/703", "diff_url": "https://github.com/huggingface/datasets/pull/703.diff", "patch_url": "https://github.com/huggingface/datasets/pull/703.patch", "merged_at": "2020-10-02T12:54:40"...
true
713,499,628
702
Complete rouge kwargs
closed
[]
2020-10-02T09:59:01
2020-10-02T10:11:04
2020-10-02T10:11:03
In #701 we noticed that some kwargs were missing for rouge
lhoestq
https://github.com/huggingface/datasets/pull/702
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/702", "html_url": "https://github.com/huggingface/datasets/pull/702", "diff_url": "https://github.com/huggingface/datasets/pull/702.diff", "patch_url": "https://github.com/huggingface/datasets/pull/702.patch", "merged_at": "2020-10-02T10:11:03"...
true
713,485,757
701
Add rouge 2 and rouge Lsum to rouge metric outputs
closed
[]
2020-10-02T09:35:46
2020-10-02T09:55:14
2020-10-02T09:52:18
Continuation of #700 Rouge 2 and Rouge Lsum were missing in Rouge's outputs. Rouge Lsum is also useful to evaluate Rouge L for sentences with `\n` Fix #617
lhoestq
https://github.com/huggingface/datasets/pull/701
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/701", "html_url": "https://github.com/huggingface/datasets/pull/701", "diff_url": "https://github.com/huggingface/datasets/pull/701.diff", "patch_url": "https://github.com/huggingface/datasets/pull/701.patch", "merged_at": "2020-10-02T09:52:18"...
true
713,450,295
700
Add rouge-2 in rouge_types for metric calculation
closed
[]
2020-10-02T08:36:45
2020-10-02T11:08:49
2020-10-02T09:59:05
The description of the ROUGE metric says, ``` _KWARGS_DESCRIPTION = """ Calculates average rouge scores for a list of hypotheses and references Args: predictions: list of predictions to score. Each predictions should be a string with tokens separated by spaces. references: list of reference for ...
Shashi456
https://github.com/huggingface/datasets/pull/700
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/700", "html_url": "https://github.com/huggingface/datasets/pull/700", "diff_url": "https://github.com/huggingface/datasets/pull/700.diff", "patch_url": "https://github.com/huggingface/datasets/pull/700.patch", "merged_at": null }
true
713,395,642
699
XNLI dataset is not loading
closed
[]
2020-10-02T06:53:16
2020-10-03T17:45:52
2020-10-03T17:43:37
`dataset = datasets.load_dataset(path='xnli')` showing below error ``` /opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 36 if len(bad_urls) > 0: 37 error_msg = "Checksums didn't match" + for_verifi...
imadarsh1001
https://github.com/huggingface/datasets/issues/699
null
false
712,979,029
697
Update README.md
closed
[]
2020-10-01T16:02:42
2020-10-01T16:12:00
2020-10-01T16:12:00
Hey I was just telling my subscribers to check out your repositories Thank you
bishug
https://github.com/huggingface/datasets/pull/697
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/697", "html_url": "https://github.com/huggingface/datasets/pull/697", "diff_url": "https://github.com/huggingface/datasets/pull/697.diff", "patch_url": "https://github.com/huggingface/datasets/pull/697.patch", "merged_at": null }
true
712,942,977
696
Elasticsearch index docs
closed
[]
2020-10-01T15:18:58
2020-10-02T07:48:19
2020-10-02T07:48:18
I added the docs for ES indexes. I also added a `load_elasticsearch_index` method to load an index that has already been built. I checked the tests for the ES index and we have tests that mock ElasticSearch. I think this is good for now but at some point it would be cool to have an end-to-end test with a real ES...
lhoestq
https://github.com/huggingface/datasets/pull/696
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/696", "html_url": "https://github.com/huggingface/datasets/pull/696", "diff_url": "https://github.com/huggingface/datasets/pull/696.diff", "patch_url": "https://github.com/huggingface/datasets/pull/696.patch", "merged_at": "2020-10-02T07:48:18"...
true
712,843,949
695
Update XNLI download link
closed
[]
2020-10-01T13:27:22
2020-10-01T14:01:15
2020-10-01T14:01:14
The old link isn't working anymore. I updated it with the new official link. Fix #690
lhoestq
https://github.com/huggingface/datasets/pull/695
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/695", "html_url": "https://github.com/huggingface/datasets/pull/695", "diff_url": "https://github.com/huggingface/datasets/pull/695.diff", "patch_url": "https://github.com/huggingface/datasets/pull/695.patch", "merged_at": "2020-10-01T14:01:14"...
true
712,827,751
694
Use GitHub instead of aws in remote dataset tests
closed
[]
2020-10-01T13:07:50
2020-10-02T07:47:28
2020-10-02T07:47:27
Recently we switched from AWS S3 to GitHub to download dataset scripts. However in the tests, the dummy data were still downloaded from S3, so I changed that to download them from GitHub instead, in the MockDownloadManager. Moreover I noticed that `anli`'s dummy data were quite heavy (18MB compressed, i.e. the ent...
lhoestq
https://github.com/huggingface/datasets/pull/694
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/694", "html_url": "https://github.com/huggingface/datasets/pull/694", "diff_url": "https://github.com/huggingface/datasets/pull/694.diff", "patch_url": "https://github.com/huggingface/datasets/pull/694.patch", "merged_at": "2020-10-02T07:47:26"...
true
712,822,200
693
Rachel ker add dataset/mlsum
closed
[]
2020-10-01T13:01:10
2023-09-24T09:48:23
2020-10-01T17:01:13
.
pdhg
https://github.com/huggingface/datasets/pull/693
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/693", "html_url": "https://github.com/huggingface/datasets/pull/693", "diff_url": "https://github.com/huggingface/datasets/pull/693.diff", "patch_url": "https://github.com/huggingface/datasets/pull/693.patch", "merged_at": null }
true
712,818,968
692
Update README.md
closed
[]
2020-10-01T12:57:22
2020-10-02T11:01:59
2020-10-02T11:01:59
mayank1897
https://github.com/huggingface/datasets/pull/692
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/692", "html_url": "https://github.com/huggingface/datasets/pull/692", "diff_url": "https://github.com/huggingface/datasets/pull/692.diff", "patch_url": "https://github.com/huggingface/datasets/pull/692.patch", "merged_at": null }
true
712,389,499
691
Add UI filter to filter datasets based on task
closed
[]
2020-10-01T00:56:18
2022-02-15T10:46:50
2022-02-15T10:46:50
This is great work, so huge shoutout to contributors and huggingface. The [/nlp/viewer](https://huggingface.co/nlp/viewer/) is great and the [/datasets](https://huggingface.co/datasets) page is great. I was wondering if in both or either places we can have a filter that selects if a dataset is good for the following...
praateekmahajan
https://github.com/huggingface/datasets/issues/691
null
false
712,150,321
690
XNLI dataset: NonMatchingChecksumError
closed
[]
2020-09-30T17:50:03
2020-10-01T17:15:08
2020-10-01T14:01:14
Hi, I tried to download "xnli" dataset in colab using `xnli = load_dataset(path='xnli')` but got 'NonMatchingChecksumError' error `NonMatchingChecksumError Traceback (most recent call last) <ipython-input-27-a87bedc82eeb> in <module>() ----> 1 xnli = load_dataset(path='xnli') 3 frames /usr...
xiey1
https://github.com/huggingface/datasets/issues/690
null
false
712,095,262
689
Switch to pandas reader for text dataset
closed
[]
2020-09-30T16:28:12
2020-09-30T16:45:32
2020-09-30T16:45:31
Following the discussion in #622, it appears that there's no appropriate way to use the pyarrow csv reader to read text files because of the separator. In this PR I switched to pandas to read the file. Moreover pandas allows reading the file by chunk, which means that you can build the arrow dataset from a text...
lhoestq
https://github.com/huggingface/datasets/pull/689
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/689", "html_url": "https://github.com/huggingface/datasets/pull/689", "diff_url": "https://github.com/huggingface/datasets/pull/689.diff", "patch_url": "https://github.com/huggingface/datasets/pull/689.patch", "merged_at": "2020-09-30T16:45:31"...
true
711,804,828
688
Disable tokenizers parallelism in multiprocessed map
closed
[]
2020-09-30T09:53:34
2020-10-01T08:45:46
2020-10-01T08:45:45
It was reported in #620 that using multiprocessing with a tokenizer shows this message: ``` The current process just got forked. Disabling parallelism to avoid deadlocks... To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false) ``` This message is shown when TOKENIZERS_PARALLELISM is...
lhoestq
https://github.com/huggingface/datasets/pull/688
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/688", "html_url": "https://github.com/huggingface/datasets/pull/688", "diff_url": "https://github.com/huggingface/datasets/pull/688.diff", "patch_url": "https://github.com/huggingface/datasets/pull/688.patch", "merged_at": "2020-10-01T08:45:45"...
true
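From the user's side, the same warning can be silenced the way the message suggests: set the environment variable before any forking happens (a minimal sketch of the workaround, not the PR's internal change):

```python
import os

# Disable the tokenizers library's Rust-side parallelism before the process
# forks (e.g. before a multiprocessed Dataset.map), avoiding the warning.
os.environ["TOKENIZERS_PARALLELISM"] = "false"
print(os.environ["TOKENIZERS_PARALLELISM"])  # false
```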
711,664,810
687
`ArrowInvalid` occurs while running `Dataset.map()` function
closed
[]
2020-09-30T06:16:50
2020-09-30T09:53:03
2020-09-30T09:53:03
It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error. Code: ```python # train_ds = Dataset(features: { # 'title': Value(dtype='string', id=None), # 'score': Value(dtype='float64', id=Non...
peinan
https://github.com/huggingface/datasets/issues/687
null
false
711,385,739
686
Dataset browser url is still https://huggingface.co/nlp/viewer/
closed
[]
2020-09-29T19:21:52
2021-01-08T18:29:26
2021-01-08T18:29:26
Might be worth updating to https://huggingface.co/datasets/viewer/
jarednielsen
https://github.com/huggingface/datasets/issues/686
null
false
711,182,185
685
Add features parameter to CSV
closed
[]
2020-09-29T14:43:36
2020-09-30T08:39:56
2020-09-30T08:39:54
Add support for the `features` parameter when loading a csv dataset: ```python from datasets import load_dataset, Features features = Features({...}) csv_dataset = load_dataset("csv", data_files=["path/to/my/file.csv"], features=features) ``` I added tests to make sure that it is also compatible with the ca...
lhoestq
https://github.com/huggingface/datasets/pull/685
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/685", "html_url": "https://github.com/huggingface/datasets/pull/685", "diff_url": "https://github.com/huggingface/datasets/pull/685.diff", "patch_url": "https://github.com/huggingface/datasets/pull/685.patch", "merged_at": "2020-09-30T08:39:54"...
true
711,080,947
684
Fix column order issue in cast
closed
[]
2020-09-29T12:49:13
2020-09-29T15:56:46
2020-09-29T15:56:45
Previously, the order of the columns in the features passed to `cast_` mattered. Even though features passed to `cast_` had the same order as the dataset features, it could fail because the schema that was built was always in alphabetical order. This issue was reported by @lewtun in #623 To fix that I fi...
lhoestq
https://github.com/huggingface/datasets/pull/684
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/684", "html_url": "https://github.com/huggingface/datasets/pull/684", "diff_url": "https://github.com/huggingface/datasets/pull/684.diff", "patch_url": "https://github.com/huggingface/datasets/pull/684.patch", "merged_at": "2020-09-29T15:56:45"...
true
710,942,704
683
Fix wrong delimiter in text dataset
closed
[]
2020-09-29T09:43:24
2021-05-05T18:24:31
2020-09-29T09:44:06
The delimiter is set to the bell character, as it is used nowhere in text files usually. However in the text dataset the delimiter was set to `\b`, which is backspace in Python, while the bell character is `\a`. I replaced `\b` with `\a`. Hopefully this fixes the issues mentioned by some users in #622
lhoestq
https://github.com/huggingface/datasets/pull/683
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/683", "html_url": "https://github.com/huggingface/datasets/pull/683", "diff_url": "https://github.com/huggingface/datasets/pull/683.diff", "patch_url": "https://github.com/huggingface/datasets/pull/683.patch", "merged_at": null }
true
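The one-character mix-up the PR corrects is easy to verify from Python's escape sequences:

```python
# Bell vs backspace: the PR swaps the escape that was mistakenly used.
print(hex(ord("\a")))  # 0x7 -- bell, rarely present in text, safe delimiter
print(hex(ord("\b")))  # 0x8 -- backspace, what was mistakenly used
```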
710,325,399
682
Update navbar chapter titles color
closed
[]
2020-09-28T14:35:17
2020-09-28T17:30:13
2020-09-28T17:30:12
Consistency with the color change that was done in transformers at https://github.com/huggingface/transformers/pull/7423 It makes the background-color of the chapter titles in the docs navbar darker, to differentiate them from the inner sections. see changes [here](https://691-250213286-gh.circle-artifacts.com/0/do...
lhoestq
https://github.com/huggingface/datasets/pull/682
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/682", "html_url": "https://github.com/huggingface/datasets/pull/682", "diff_url": "https://github.com/huggingface/datasets/pull/682.diff", "patch_url": "https://github.com/huggingface/datasets/pull/682.patch", "merged_at": "2020-09-28T17:30:12"...
true
710,075,721
681
Adding missing @property (+2 small flake8 fixes).
closed
[]
2020-09-28T08:53:53
2020-09-28T10:26:13
2020-09-28T10:26:09
Fixes #678
Narsil
https://github.com/huggingface/datasets/pull/681
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/681", "html_url": "https://github.com/huggingface/datasets/pull/681", "diff_url": "https://github.com/huggingface/datasets/pull/681.diff", "patch_url": "https://github.com/huggingface/datasets/pull/681.patch", "merged_at": "2020-09-28T10:26:09"...
true
710,066,138
680
Fix bug related to boolean in GAP dataset.
closed
[]
2020-09-28T08:39:39
2020-09-29T15:54:47
2020-09-29T15:54:47
### Why I did The values in `row["A-coref"]` and `row["B-coref"]` are `'TRUE'` or `'FALSE'`. These are strings, and `bool('FALSE')` evaluates to `True` in Python, so both rows were being transformed into `True`. I fixed this problem. ### What I did I modified `bool(row["A-coref"])` and `bool(row["B-cor...
otakumesi
https://github.com/huggingface/datasets/pull/680
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/680", "html_url": "https://github.com/huggingface/datasets/pull/680", "diff_url": "https://github.com/huggingface/datasets/pull/680.diff", "patch_url": "https://github.com/huggingface/datasets/pull/680.patch", "merged_at": "2020-09-29T15:54:47"...
true
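The root cause above is that any non-empty string is truthy in Python; the remedy is to compare against the literal string instead. A sketch of that pattern (`parse_coref` is a hypothetical name, not the PR's actual code):

```python
# Why the bug happens: truthiness of a non-empty string.
print(bool("FALSE"))  # True

# Sketch of the fix: compare the string value explicitly.
def parse_coref(value):
    return value.strip().upper() == "TRUE"

print(parse_coref("TRUE"), parse_coref("FALSE"))  # True False
```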
710,065,838
679
Fix negative ids when slicing with an array
closed
[]
2020-09-28T08:39:08
2020-09-28T14:42:20
2020-09-28T14:42:19
```python from datasets import Dataset d = Dataset.from_dict({"a": range(10)}) print(d[[0, -1]]) # OverflowError ``` raises an error because of the negative id. This PR fixes that. Fix #668
lhoestq
https://github.com/huggingface/datasets/pull/679
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/679", "html_url": "https://github.com/huggingface/datasets/pull/679", "diff_url": "https://github.com/huggingface/datasets/pull/679.diff", "patch_url": "https://github.com/huggingface/datasets/pull/679.patch", "merged_at": "2020-09-28T14:42:19"...
true
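The shape of the fix is standard index normalization: translate negative ids into their positive equivalents before they reach the underlying Arrow table (`normalize_index` is a hypothetical helper, not the PR's actual code):

```python
# Sketch: map Python-style negative indices onto [0, length) and reject
# anything out of range, instead of passing raw negatives to Arrow.
def normalize_index(i, length):
    if i < 0:
        i += length
    if not 0 <= i < length:
        raise IndexError(f"index {i} out of range for length {length}")
    return i

print([normalize_index(i, 10) for i in [0, -1]])  # [0, 9]
```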
710,060,497
678
The download instructions for c4 datasets are not contained in the error message
closed
[]
2020-09-28T08:30:54
2020-09-28T10:26:09
2020-09-28T10:26:09
The manual download instructions are not clear ```The dataset c4 with config en requires manual data. Please follow the manual download instructions: <bound method C4.manual_download_instructions of <datasets_modules.datasets.c4.830b0c218bd41fed439812c8dd19dbd4767d2a3faa385eb695cf8666c982b1b3.c4.C4 object at 0x7ff...
Narsil
https://github.com/huggingface/datasets/issues/678
null
false
710,055,239
677
Move cache dir root creation in builder's init
closed
[]
2020-09-28T08:22:46
2020-09-28T14:42:43
2020-09-28T14:42:42
We use lock files in the builder initialization but sometimes the cache directory where they're supposed to be was not created. To fix that I moved the builder's cache dir root creation in the builder's init. Fix #671
lhoestq
https://github.com/huggingface/datasets/pull/677
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/677", "html_url": "https://github.com/huggingface/datasets/pull/677", "diff_url": "https://github.com/huggingface/datasets/pull/677.diff", "patch_url": "https://github.com/huggingface/datasets/pull/677.patch", "merged_at": "2020-09-28T14:42:42"...
true
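The fix amounts to creating the cache root eagerly, before any lock file is placed in it. A minimal stdlib sketch (the function and file names here are assumptions, not the builder's actual code):

```python
import os
import tempfile

# Sketch: ensure the cache root exists in the builder's init so the lock
# file always has a directory to live in.
def init_builder_cache(cache_root):
    os.makedirs(cache_root, exist_ok=True)  # idempotent, safe if it exists
    return os.path.join(cache_root, "builder.lock")

root = os.path.join(tempfile.mkdtemp(), "datasets")
lock_path = init_builder_cache(root)
print(os.path.isdir(root))  # True
```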
710,014,319
676
train_test_split returns empty dataset item
closed
[]
2020-09-28T07:19:33
2020-10-07T13:46:33
2020-10-07T13:38:06
I try to split my dataset with `train_test_split`, but after that the items in the `train` and `test` `Dataset` are empty. The code: ``` yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp') print(yelp_data[0]) yelp_data = yelp_data.train_test_split(test_size=0.1) print(yelp_data) pri...
mojave-pku
https://github.com/huggingface/datasets/issues/676
null
false
709,818,725
675
Add custom dataset to NLP?
closed
[]
2020-09-27T21:22:50
2020-10-20T09:08:49
2020-10-20T09:08:49
Is it possible to add a custom dataset such as a .csv to the NLP library? Thanks.
timpal0l
https://github.com/huggingface/datasets/issues/675
null
false
709,661,006
674
load_dataset() won't download in Windows
closed
[]
2020-09-27T03:56:25
2020-10-05T08:28:18
2020-10-05T08:28:18
I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've wa...
ThisDavehead
https://github.com/huggingface/datasets/issues/674
null
false
709,603,989
673
blog_authorship_corpus crashed
closed
[]
2020-09-26T20:15:28
2022-02-15T10:47:58
2022-02-15T10:47:58
This is just to report that when I pick blog_authorship_corpus in https://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus I get this: ![image](https://user-images.githubusercontent.com/7553188/94349542-4364f300-0013-11eb-897d-b25660a449f0.png)
Moshiii
https://github.com/huggingface/datasets/issues/673
null
false
709,575,527
672
Questions about XSUM
closed
[]
2020-09-26T17:16:24
2022-10-04T17:30:17
2022-10-04T17:30:17
Hi there ✋ I'm looking into your `xsum` dataset and I have several questions on that. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu...
danyaljj
https://github.com/huggingface/datasets/issues/672
null
false
709,093,151
671
[BUG] No such file or directory
closed
[]
2020-09-25T16:38:54
2020-09-28T14:42:42
2020-09-28T14:42:42
This happens when both 1. Huggingface datasets cache dir does not exist 2. Try to load a local dataset script builder.py throws an error when trying to create a filelock in a directory (cache/datasets) that does not exist https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L177 Tested o...
jbragg
https://github.com/huggingface/datasets/issues/671
null
false
709,061,231
670
Fix SQuAD metric kwargs description
closed
[]
2020-09-25T16:08:57
2020-09-29T15:57:39
2020-09-29T15:57:38
The `answer_start` field was missing in the kwargs docstring. This should fix #657 FYI another fix was proposed by @tshrjn in #658 and suggests to remove this field. However IMO `answer_start` is useful to match the squad dataset format for consistency, even though it is not used in the metric computation. I th...
lhoestq
https://github.com/huggingface/datasets/pull/670
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/670", "html_url": "https://github.com/huggingface/datasets/pull/670", "diff_url": "https://github.com/huggingface/datasets/pull/670.diff", "patch_url": "https://github.com/huggingface/datasets/pull/670.patch", "merged_at": "2020-09-29T15:57:37"...
true
708,857,595
669
How to skip an example when running dataset.map
closed
[]
2020-09-25T11:17:53
2022-06-17T21:45:03
2020-10-05T16:28:13
In my processing function I detect some invalid examples that I do not want added to the train dataset. However, I did not find a way to skip a recognized invalid example when doing dataset.map.
xixiaoyao
https://github.com/huggingface/datasets/issues/669
null
false
708,310,956
668
OverflowError when slicing with an array containing negative ids
closed
[]
2020-09-24T16:27:14
2020-09-28T14:42:19
2020-09-28T14:42:19
```python from datasets import Dataset d = ds.Dataset.from_dict({"a": range(10)}) print(d[0]) # {'a': 0} print(d[-1]) # {'a': 9} print(d[[0, -1]]) # OverflowError ``` results in ``` --------------------------------------------------------------------------- OverflowError ...
lhoestq
https://github.com/huggingface/datasets/issues/668
null
false
708,258,392
667
Loss not decrease with Datasets and Transformers
closed
[]
2020-09-24T15:14:43
2021-01-01T20:01:25
2021-01-01T20:01:25
HI, The following script is used to fine-tune a BertForSequenceClassification model on SST2. The script is adapted from [this colab](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) that presents an example of fine-tuning BertForQuestionAnswering using squad data...
wangcongcong123
https://github.com/huggingface/datasets/issues/667
null
false
707,608,578
666
Does both 'bookcorpus' and 'wikipedia' belong to the same datasets which Google used for pretraining BERT?
closed
[]
2020-09-23T19:02:25
2020-10-27T15:19:25
2020-10-27T15:19:25
wahab4114
https://github.com/huggingface/datasets/issues/666
null
false
707,037,738
665
running dataset.map raises TypeError: can't pickle Tokenizer objects
closed
[]
2020-09-23T04:28:14
2020-10-08T09:32:16
2020-10-08T09:32:16
I load the squad dataset, then want to process the data using the following function with the `Huggingface Transformers LongformerTokenizer`. ``` def convert_to_features(example): # Tokenize contexts and questions (as pairs of inputs) input_pairs = [example['question'], example['context']] encodings = tokenizer.encode...
xixiaoyao
https://github.com/huggingface/datasets/issues/665
null
false
707,017,791
664
load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable
closed
[]
2020-09-23T03:53:36
2023-04-17T09:31:20
2020-10-20T09:06:13
version: 1.0.2 ``` train_dataset = datasets.load_dataset('squad') ``` The above code works. However, when I download squad.py from your server and save it locally as `my_squad.py`, running the following raises errors. ``` train_dataset = datasets.load_dataset('./my_squad.py') ...
xixiaoyao
https://github.com/huggingface/datasets/issues/664
null
false
706,732,636
663
Created dataset card snli.md
closed
[]
2020-09-22T22:29:37
2020-10-13T17:05:20
2020-10-12T20:26:52
First draft of a dataset card using the SNLI corpus as an example. This is mostly based on the [Google Doc draft](https://docs.google.com/document/d/1dKPGP-dA2W0QoTRGfqQ5eBp0CeSsTy7g2yM8RseHtos/edit), but I added a few sections and moved some things around. - I moved **Who Was Involved** to follow **Language**, ...
mcmillanmajora
https://github.com/huggingface/datasets/pull/663
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/663", "html_url": "https://github.com/huggingface/datasets/pull/663", "diff_url": "https://github.com/huggingface/datasets/pull/663.diff", "patch_url": "https://github.com/huggingface/datasets/pull/663.patch", "merged_at": "2020-10-12T20:26:52"...
true
706,689,866
662
Created dataset card snli.md
closed
[]
2020-09-22T21:00:17
2023-09-24T09:50:16
2020-09-22T21:26:21
First draft of a dataset card using the SNLI corpus as an example
mcmillanmajora
https://github.com/huggingface/datasets/pull/662
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/662", "html_url": "https://github.com/huggingface/datasets/pull/662", "diff_url": "https://github.com/huggingface/datasets/pull/662.diff", "patch_url": "https://github.com/huggingface/datasets/pull/662.patch", "merged_at": null }
true
706,465,936
661
Replace pa.OSFile by open
closed
[]
2020-09-22T15:05:59
2021-05-05T18:24:36
2020-09-22T15:15:25
It should fix #643
lhoestq
https://github.com/huggingface/datasets/pull/661
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/661", "html_url": "https://github.com/huggingface/datasets/pull/661", "diff_url": "https://github.com/huggingface/datasets/pull/661.diff", "patch_url": "https://github.com/huggingface/datasets/pull/661.patch", "merged_at": null }
true
706,324,032
660
add openwebtext
closed
[]
2020-09-22T12:05:22
2020-10-06T09:20:10
2020-09-28T09:07:26
This adds [The OpenWebText Corpus](https://skylion007.github.io/OpenWebTextCorpus/), which is a clean and large text corpus for nlp pretraining. It is an open source effort to reproduce OpenAI’s WebText dataset used by GPT-2, and it is also needed to reproduce ELECTRA. It solves #132 . ### Besides dataset buildin...
richarddwang
https://github.com/huggingface/datasets/pull/660
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/660", "html_url": "https://github.com/huggingface/datasets/pull/660", "diff_url": "https://github.com/huggingface/datasets/pull/660.diff", "patch_url": "https://github.com/huggingface/datasets/pull/660.patch", "merged_at": "2020-09-28T09:07:26"...
true
706,231,506
659
Keep new columns in transmit format
closed
[]
2020-09-22T09:47:23
2020-09-22T10:07:22
2020-09-22T10:07:20
When a dataset is formatted with a list of columns that `__getitem__` should return, then calling `map` to add new columns doesn't add the new columns to this list. It caused `KeyError` issues in #620 I changed the logic to add those new columns to the list that `__getitem__` should return.
lhoestq
https://github.com/huggingface/datasets/pull/659
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/659", "html_url": "https://github.com/huggingface/datasets/pull/659", "diff_url": "https://github.com/huggingface/datasets/pull/659.diff", "patch_url": "https://github.com/huggingface/datasets/pull/659.patch", "merged_at": "2020-09-22T10:07:20"...
true
706,206,247
658
Fix squad metric's Features
closed
[]
2020-09-22T09:09:52
2020-09-29T15:58:30
2020-09-29T15:58:30
Resolves issue [657](https://github.com/huggingface/datasets/issues/657).
tshrjn
https://github.com/huggingface/datasets/pull/658
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/658", "html_url": "https://github.com/huggingface/datasets/pull/658", "diff_url": "https://github.com/huggingface/datasets/pull/658.diff", "patch_url": "https://github.com/huggingface/datasets/pull/658.patch", "merged_at": null }
true
706,204,383
657
Squad Metric Description & Feature Mismatch
closed
[]
2020-09-22T09:07:00
2020-10-13T02:16:56
2020-09-29T15:57:38
The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also not used in the evaluation.
tshrjn
https://github.com/huggingface/datasets/issues/657
null
false
705,736,319
656
Use multiprocess from pathos for multiprocessing
closed
[]
2020-09-21T16:12:19
2020-09-28T14:45:40
2020-09-28T14:45:39
[Multiprocess](https://github.com/uqfoundation/multiprocess) (from the [pathos](https://github.com/uqfoundation/pathos) project) allows to use lambda functions in multiprocessed map. It was suggested to use it by @kandorm. We're already using dill which is its only dependency.
lhoestq
https://github.com/huggingface/datasets/pull/656
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/656", "html_url": "https://github.com/huggingface/datasets/pull/656", "diff_url": "https://github.com/huggingface/datasets/pull/656.diff", "patch_url": "https://github.com/huggingface/datasets/pull/656.patch", "merged_at": "2020-09-28T14:45:39"...
true
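The point of switching to multiprocess is that its serializer, dill, handles objects the standard library's pickle rejects, lambdas in particular. An illustrative sketch (assumes `dill` is installed, which it is as a dependency of `datasets`):

```python
import pickle

import dill

square = lambda x: x * x

try:
    pickle.dumps(square)
    print("stdlib pickle succeeded")
except (pickle.PicklingError, AttributeError, TypeError) as e:
    # Lambdas usually cannot be found by qualified name, so pickle fails.
    print("stdlib pickle fails:", type(e).__name__)

# dill round-trips the lambda without trouble.
restored = dill.loads(dill.dumps(square))
print(restored(4))  # 16
```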
705,672,208
655
added Winogrande debiased subset
closed
[]
2020-09-21T14:51:08
2020-09-21T16:20:40
2020-09-21T16:16:04
The [Winogrande](https://arxiv.org/abs/1907.10641) paper mentions a `debiased` subset that wasn't in the first release; this PR adds it.
TevenLeScao
https://github.com/huggingface/datasets/pull/655
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/655", "html_url": "https://github.com/huggingface/datasets/pull/655", "diff_url": "https://github.com/huggingface/datasets/pull/655.diff", "patch_url": "https://github.com/huggingface/datasets/pull/655.patch", "merged_at": "2020-09-21T16:16:04"...
true
705,511,058
654
Allow empty inputs in metrics
closed
[]
2020-09-21T11:26:36
2020-10-06T03:51:48
2020-09-21T16:13:38
There was an arrow error when trying to compute a metric with empty inputs. The error was occurring when reading the arrow file, before calling metric._compute.
lhoestq
https://github.com/huggingface/datasets/pull/654
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/654", "html_url": "https://github.com/huggingface/datasets/pull/654", "diff_url": "https://github.com/huggingface/datasets/pull/654.diff", "patch_url": "https://github.com/huggingface/datasets/pull/654.patch", "merged_at": "2020-09-21T16:13:38"...
true
705,482,391
653
handle data alteration when trying type
closed
[]
2020-09-21T10:41:49
2020-09-21T16:13:06
2020-09-21T16:13:05
Fix #649 The bug came from the type inference that didn't handle a weird case in Pyarrow. Indeed this code runs without error but alters the data in arrow: ```python import pyarrow as pa type = pa.struct({"a": pa.struct({"b": pa.string()})}) array_with_altered_data = pa.array([{"a": {"b": "foo", "c": "bar"}}...
lhoestq
https://github.com/huggingface/datasets/pull/653
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/653", "html_url": "https://github.com/huggingface/datasets/pull/653", "diff_url": "https://github.com/huggingface/datasets/pull/653.diff", "patch_url": "https://github.com/huggingface/datasets/pull/653.patch", "merged_at": "2020-09-21T16:13:05"...
true
705,390,850
652
handle connection error in download_prepared_from_hf_gcs
closed
[]
2020-09-21T08:21:11
2020-09-21T08:28:43
2020-09-21T08:28:42
Fix #647
lhoestq
https://github.com/huggingface/datasets/pull/652
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/652", "html_url": "https://github.com/huggingface/datasets/pull/652", "diff_url": "https://github.com/huggingface/datasets/pull/652.diff", "patch_url": "https://github.com/huggingface/datasets/pull/652.patch", "merged_at": "2020-09-21T08:28:42"...
true
705,212,034
651
Problem with JSON dataset format
open
[]
2020-09-20T23:57:14
2020-09-21T12:14:24
null
I have a local json dataset with the following form. { 'id01234': {'key1': value1, 'key2': value2, 'key3': value3}, 'id01235': {'key1': value1, 'key2': value2, 'key3': value3}, . . . 'id09999': {'key1': value1, 'key2': value2, 'key3': value3} } Note that instead of a list of records i...
vikigenius
https://github.com/huggingface/datasets/issues/651
null
false
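A mapping of id to record, as described above, can be flattened into JSON Lines, which the `json` builder accepts. A stdlib-only sketch with toy values (the real keys and values are unknown here):

```python
import json
import tempfile

# Toy stand-in for the id -> record mapping described above.
raw = {
    "id01234": {"key1": 1, "key2": 2, "key3": 3},
    "id01235": {"key1": 4, "key2": 5, "key3": 6},
}

# Flatten to a list of records, keeping the id as a regular field.
records = [{"id": k, **v} for k, v in raw.items()]

with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    for r in records:
        f.write(json.dumps(r) + "\n")
    path = f.name

print(records[0])  # {'id': 'id01234', 'key1': 1, 'key2': 2, 'key3': 3}
# load_dataset("json", data_files=path) can now read the file.
```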
704,861,844
650
dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators`
closed
[]
2020-09-19T11:07:03
2020-09-22T11:54:10
2020-09-22T11:54:09
Hi, I recently wanted to add a dataset whose source data looks like this ``` openwebtext.tar.xz |__ openwebtext |__subset000.xz | |__ ....txt | |__ ....txt | ... |__ subset001.xz | .... ``` So I wrote `openwebtext.py` like this ``` d...
richarddwang
https://github.com/huggingface/datasets/issues/650
null
false
704,838,415
649
Inconsistent behavior in map
closed
[]
2020-09-19T08:41:12
2020-09-21T16:13:05
2020-09-21T16:13:05
I'm observing inconsistent behavior when applying .map(). This happens specifically when I'm incrementally adding onto a feature that is a nested dictionary. Here's a simple example that reproduces the problem. ```python import datasets # Dataset with a single feature called 'field' consisting of two examples d...
krandiash
https://github.com/huggingface/datasets/issues/649
null
false
704,753,123
648
offset overflow when multiprocessing batched map on large datasets.
closed
[]
2020-09-19T02:15:11
2025-06-17T12:56:07
2020-09-19T16:46:31
It only happens when "multiprocessing" + "batched" + "large dataset" are combined. ``` def bprocess(examples): examples['len'] = [] for text in examples['text']: examples['len'].append(len(text)) return examples wiki.map(bprocess, batched=True, num_proc=8) ``` ``` ----------------------------...
richarddwang
https://github.com/huggingface/datasets/issues/648
null
false
704,734,764
647
Cannot download dataset_info.json
closed
[]
2020-09-19T01:35:15
2020-09-21T08:28:42
2020-09-21T08:28:42
I am running my job on a cloud server that does not allow connections from the standard compute nodes to outside resources. Hence, when I use `datasets.load_dataset()` to load data, I get an error like this: ``` ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/text...
chiyuzhang94
https://github.com/huggingface/datasets/issues/647
null
false
704,607,371
646
Fix docs typos
closed
[]
2020-09-18T19:32:27
2020-09-21T16:30:54
2020-09-21T16:14:12
This PR fixes few typos in the docs and the error in the code snippet in the set_format section in docs/source/torch_tensorflow.rst. `torch.utils.data.Dataloader` expects padded batches so it throws an error due to not being able to stack the unpadded tensors. If we follow the Quick tour from the docs where they add th...
mariosasko
https://github.com/huggingface/datasets/pull/646
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/646", "html_url": "https://github.com/huggingface/datasets/pull/646", "diff_url": "https://github.com/huggingface/datasets/pull/646.diff", "patch_url": "https://github.com/huggingface/datasets/pull/646.patch", "merged_at": "2020-09-21T16:14:12"...
true
704,542,234
645
Don't use take on dataset table in pyarrow 1.0.x
closed
[]
2020-09-18T17:31:34
2023-09-19T07:59:19
2020-09-19T16:46:31
Fix #615
lhoestq
https://github.com/huggingface/datasets/pull/645
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/645", "html_url": "https://github.com/huggingface/datasets/pull/645", "diff_url": "https://github.com/huggingface/datasets/pull/645.diff", "patch_url": "https://github.com/huggingface/datasets/pull/645.patch", "merged_at": "2020-09-19T16:46:31"...
true
704,534,501
644
Better windows support
closed
[]
2020-09-18T17:17:36
2020-09-25T14:02:30
2020-09-25T14:02:28
There are a few differences in the behavior of python and pyarrow on windows. For example there are restrictions when accessing/deleting files that are open Fix #590
lhoestq
https://github.com/huggingface/datasets/pull/644
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/644", "html_url": "https://github.com/huggingface/datasets/pull/644", "diff_url": "https://github.com/huggingface/datasets/pull/644.diff", "patch_url": "https://github.com/huggingface/datasets/pull/644.patch", "merged_at": "2020-09-25T14:02:28"...
true
704,477,164
643
Caching processed dataset at wrong folder
closed
[]
2020-09-18T15:41:26
2022-02-16T14:53:29
2022-02-16T14:53:29
Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncation=True, padding='max_length') dataset = ...
mrm8488
https://github.com/huggingface/datasets/issues/643
null
false
704,397,499
642
Rename wnut fields
closed
[]
2020-09-18T13:51:31
2020-09-18T17:18:31
2020-09-18T17:18:30
As mentioned in #641 it would be cool to have it follow the naming of the other NER datasets
lhoestq
https://github.com/huggingface/datasets/pull/642
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/642", "html_url": "https://github.com/huggingface/datasets/pull/642", "diff_url": "https://github.com/huggingface/datasets/pull/642.diff", "patch_url": "https://github.com/huggingface/datasets/pull/642.patch", "merged_at": "2020-09-18T17:18:30"...
true
704,373,940
641
Add Polyglot-NER Dataset
closed
[]
2020-09-18T13:21:44
2020-09-20T03:04:43
2020-09-20T03:04:43
Adds the [Polyglot-NER dataset](https://sites.google.com/site/rmyeid/projects/polylgot-ner) with named entity tags for 40 languages. I include separate configs for each language as well as a `combined` config which lumps them all together.
joeddav
https://github.com/huggingface/datasets/pull/641
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/641", "html_url": "https://github.com/huggingface/datasets/pull/641", "diff_url": "https://github.com/huggingface/datasets/pull/641.diff", "patch_url": "https://github.com/huggingface/datasets/pull/641.patch", "merged_at": "2020-09-20T03:04:43"...
true
704,311,758
640
Make shuffle compatible with temp_seed
closed
[]
2020-09-18T11:38:58
2020-09-18T11:47:51
2020-09-18T11:47:50
This code used to return different dataset at each run ```python import dataset as ds dataset = ... with ds.temp_seed(42): shuffled = dataset.shuffle() ``` Now it returns the same one since the seed is set
lhoestq
https://github.com/huggingface/datasets/pull/640
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/640", "html_url": "https://github.com/huggingface/datasets/pull/640", "diff_url": "https://github.com/huggingface/datasets/pull/640.diff", "patch_url": "https://github.com/huggingface/datasets/pull/640.patch", "merged_at": "2020-09-18T11:47:50"...
true
704,217,963
639
Update glue QQP checksum
closed
[]
2020-09-18T09:08:15
2020-09-18T11:37:08
2020-09-18T11:37:07
Fix #638
lhoestq
https://github.com/huggingface/datasets/pull/639
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/639", "html_url": "https://github.com/huggingface/datasets/pull/639", "diff_url": "https://github.com/huggingface/datasets/pull/639.diff", "patch_url": "https://github.com/huggingface/datasets/pull/639.patch", "merged_at": "2020-09-18T11:37:07"...
true
704,146,956
638
GLUE/QQP dataset: NonMatchingChecksumError
closed
[]
2020-09-18T07:09:10
2020-09-18T11:37:07
2020-09-18T11:37:07
Hi @lhoestq, I know you are busy and there are other important issues. But if this is easy to fix, I am shamelessly wondering if you can give me some help, so I can evaluate my models and restart my development cycle asap. 😚 datasets version: editable install of master at 9/17 `datasets.load_data...
richarddwang
https://github.com/huggingface/datasets/issues/638
null
false
703,539,909
637
Add MATINF
closed
[]
2020-09-17T12:24:53
2020-09-17T13:23:18
2020-09-17T13:23:17
JetRunner
https://github.com/huggingface/datasets/pull/637
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/637", "html_url": "https://github.com/huggingface/datasets/pull/637", "diff_url": "https://github.com/huggingface/datasets/pull/637.diff", "patch_url": "https://github.com/huggingface/datasets/pull/637.patch", "merged_at": "2020-09-17T13:23:17"...
true
702,883,989
636
Consistent ner features
closed
[]
2020-09-16T15:56:25
2020-09-17T09:52:59
2020-09-17T09:52:58
As discussed in #613 , this PR aims at making NER feature names consistent across datasets. I changed the feature names of LinCE and XTREME/PAN-X
lhoestq
https://github.com/huggingface/datasets/pull/636
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/636", "html_url": "https://github.com/huggingface/datasets/pull/636", "diff_url": "https://github.com/huggingface/datasets/pull/636.diff", "patch_url": "https://github.com/huggingface/datasets/pull/636.patch", "merged_at": "2020-09-17T09:52:58"...
true
702,822,439
635
Loglevel
closed
[]
2020-09-16T14:37:53
2020-09-17T09:52:19
2020-09-17T09:52:18
Continuation of #618
lhoestq
https://github.com/huggingface/datasets/pull/635
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/635", "html_url": "https://github.com/huggingface/datasets/pull/635", "diff_url": "https://github.com/huggingface/datasets/pull/635.diff", "patch_url": "https://github.com/huggingface/datasets/pull/635.patch", "merged_at": "2020-09-17T09:52:18"...
true
702,676,041
634
Add ConLL-2000 dataset
closed
[]
2020-09-16T11:14:11
2020-09-17T10:38:10
2020-09-17T10:38:10
Adds ConLL-2000 dataset used for text chunking. See https://www.clips.uantwerpen.be/conll2000/chunking/ for details and [motivation](https://github.com/huggingface/transformers/pull/7041#issuecomment-692710948) behind this PR
vblagoje
https://github.com/huggingface/datasets/pull/634
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/634", "html_url": "https://github.com/huggingface/datasets/pull/634", "diff_url": "https://github.com/huggingface/datasets/pull/634.diff", "patch_url": "https://github.com/huggingface/datasets/pull/634.patch", "merged_at": "2020-09-17T10:38:10"...
true
702,440,484
633
Load large text file for LM pre-training resulting in OOM
open
[]
2020-09-16T04:33:15
2021-02-16T12:02:01
null
I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this: ```python from datasets import load_dataset @dataclass class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling): """ Data collator u...
leethu2012
https://github.com/huggingface/datasets/issues/633
null
false
702,358,124
632
Fix typos in the loading datasets docs
closed
[]
2020-09-16T00:27:41
2020-09-21T16:31:11
2020-09-16T06:52:44
This PR fixes two typos in the loading datasets docs, one of them being a broken link to the `load_dataset` function.
mariosasko
https://github.com/huggingface/datasets/pull/632
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/632", "html_url": "https://github.com/huggingface/datasets/pull/632", "diff_url": "https://github.com/huggingface/datasets/pull/632.diff", "patch_url": "https://github.com/huggingface/datasets/pull/632.patch", "merged_at": "2020-09-16T06:52:44"...
true
701,711,255
631
Fix text delimiter
closed
[]
2020-09-15T08:08:42
2020-09-22T15:03:06
2020-09-15T08:26:25
I changed the delimiter in the `text` dataset script. It should fix the `pyarrow.lib.ArrowInvalid: CSV parse error` from #622 I changed the delimiter to an unused ascii character that is not present in text files : `\b`
lhoestq
https://github.com/huggingface/datasets/pull/631
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/631", "html_url": "https://github.com/huggingface/datasets/pull/631", "diff_url": "https://github.com/huggingface/datasets/pull/631.diff", "patch_url": "https://github.com/huggingface/datasets/pull/631.patch", "merged_at": "2020-09-15T08:26:25"...
true
701,636,350
630
Text dataset not working with large files
closed
[]
2020-09-15T06:02:36
2020-09-25T22:21:43
2020-09-25T22:21:43
``` Traceback (most recent call last): File "examples/language-modeling/run_language_modeling.py", line 333, in <module> main() File "examples/language-modeling/run_language_modeling.py", line 262, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
ksjae
https://github.com/huggingface/datasets/issues/630
null
false
701,517,550
629
straddling object straddles two block boundaries
closed
[]
2020-09-15T00:30:46
2020-09-15T00:36:17
2020-09-15T00:32:17
I am trying to read JSON data (an array with lots of dictionaries) and I am getting a block boundaries issue, as below. I tried calling read_json with ReadOptions but no luck. ``` table = json.read_json(fn) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "pyarrow/_json.pyx", li...
bharaniabhishek123
https://github.com/huggingface/datasets/issues/629
null
false
701,496,053
628
Update docs links in the contribution guideline
closed
[]
2020-09-14T23:27:19
2020-11-02T21:03:23
2020-09-15T06:19:35
Fixed the `add a dataset` and `share a dataset` links in the contribution guideline to refer to the new docs website.
M-Salti
https://github.com/huggingface/datasets/pull/628
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/628", "html_url": "https://github.com/huggingface/datasets/pull/628", "diff_url": "https://github.com/huggingface/datasets/pull/628.diff", "patch_url": "https://github.com/huggingface/datasets/pull/628.patch", "merged_at": "2020-09-15T06:19:35"...
true
701,411,661
627
fix (#619) MLQA features names
closed
[]
2020-09-14T20:41:59
2020-11-02T21:04:32
2020-09-16T06:54:11
Fixed the features names as suggested in (#619) in the `_generate_examples` and `_info` methods in the MLQA loading script and also changed the names in the `dataset_infos.json` file.
M-Salti
https://github.com/huggingface/datasets/pull/627
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/627", "html_url": "https://github.com/huggingface/datasets/pull/627", "diff_url": "https://github.com/huggingface/datasets/pull/627.diff", "patch_url": "https://github.com/huggingface/datasets/pull/627.patch", "merged_at": "2020-09-16T06:54:11"...
true
701,352,605
626
Update GLUE URLs (now hosted on FB)
closed
[]
2020-09-14T19:05:39
2020-09-16T06:53:18
2020-09-16T06:53:18
NYU is switching dataset hosting from Google to FB. This PR closes https://github.com/huggingface/datasets/issues/608 and is necessary for https://github.com/jiant-dev/jiant/issues/161. This PR updates the data URLs based on changes made in https://github.com/nyu-mll/jiant/pull/1112. Note: rebased on huggingface/dat...
jeswan
https://github.com/huggingface/datasets/pull/626
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/626", "html_url": "https://github.com/huggingface/datasets/pull/626", "diff_url": "https://github.com/huggingface/datasets/pull/626.diff", "patch_url": "https://github.com/huggingface/datasets/pull/626.patch", "merged_at": "2020-09-16T06:53:18"...
true
701,057,799
625
dtype of tensors should be preserved
closed
[]
2020-09-14T12:38:05
2021-08-17T08:30:04
2021-08-17T08:30:04
After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-...
BramVanroy
https://github.com/huggingface/datasets/issues/625
null
false
700,541,628
624
Add learningq dataset
open
[]
2020-09-13T10:20:27
2020-09-14T09:50:02
null
Hi, Thank you again for this amazing repo. Would it be possible for y'all to add the LearningQ dataset - https://github.com/AngusGLChen/LearningQ ?
krrishdholakia
https://github.com/huggingface/datasets/issues/624
null
false