id (int64, 599M to 3.48B) | number (int64, 1 to 7.8k) | title (string, lengths 1 to 290) | state (string, 2 classes) | comments (list, lengths 0 to 30) | created_at (timestamp[s], 2020-04-14 10:18:02 to 2025-10-05 06:37:50) | updated_at (timestamp[s], 2020-04-27 16:04:17 to 2025-10-05 10:32:43) | closed_at (timestamp[s], 2020-04-14 12:01:40 to 2025-10-01 13:56:03, ⌀) | body (string, lengths 0 to 228k, ⌀) | user (string, lengths 3 to 26) | html_url (string, lengths 46 to 51) | pull_request (dict) | is_pull_request (bool, 2 classes)
|---|---|---|---|---|---|---|---|---|---|---|---|---|
700,235,308 | 623 | Custom feature types in `load_dataset` from CSV | closed | [] | 2020-09-12T13:21:34 | 2020-09-30T19:51:43 | 2020-09-30T08:39:54 | I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`.
I am working with the local files from the emotion dataset. To get the data you can use the followi... | lvwerra | https://github.com/huggingface/datasets/issues/623 | null | false |
700,225,826 | 622 | load_dataset for text files not working | closed | [] | 2020-09-12T12:49:28 | 2020-10-28T11:07:31 | 2020-10-28T11:07:30 | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | BramVanroy | https://github.com/huggingface/datasets/issues/622 | null | false |
700,171,097 | 621 | [docs] Index: The native emoji looks kinda ugly in large size | closed | [] | 2020-09-12T09:48:40 | 2020-09-15T06:20:03 | 2020-09-15T06:20:02 | julien-c | https://github.com/huggingface/datasets/pull/621 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/621",
"html_url": "https://github.com/huggingface/datasets/pull/621",
"diff_url": "https://github.com/huggingface/datasets/pull/621.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/621.patch",
"merged_at": "2020-09-15T06:20:02"... | true | |
699,815,135 | 620 | map/filter multiprocessing raises errors and corrupts datasets | closed | [] | 2020-09-11T22:30:06 | 2020-10-08T16:31:47 | 2020-10-08T16:31:46 | After upgrading to 1.0, I started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | timothyjlaurent | https://github.com/huggingface/datasets/issues/620 | null | false |
699,733,612 | 619 | Mistakes in MLQA features names | closed | [] | 2020-09-11T20:46:23 | 2020-09-16T06:59:19 | 2020-09-16T06:59:19 | I think the following features in MLQA shouldn't be named the way they are:
1. `questions` (should be `question`)
2. `ids` (should be `id`)
3. `start` (should be `answer_start`)
The reasons I'm suggesting these features be renamed are:
* To make them consistent with other QA datasets like SQuAD, XQuAD, TyDiQA et... | M-Salti | https://github.com/huggingface/datasets/issues/619 | null | false |
699,684,831 | 618 | sync logging utils with transformers | closed | [] | 2020-09-11T19:46:13 | 2020-09-17T15:40:59 | 2020-09-17T09:53:47 | sync the docs/code with the recent changes in transformers' `logging` utils:
1. change the default level to `WARNING`
2. add `DATASETS_VERBOSITY` env var
3. expand docs | stas00 | https://github.com/huggingface/datasets/pull/618 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/618",
"html_url": "https://github.com/huggingface/datasets/pull/618",
"diff_url": "https://github.com/huggingface/datasets/pull/618.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/618.patch",
"merged_at": null
} | true |
699,472,596 | 617 | Compare different Rouge implementations | closed | [] | 2020-09-11T15:49:32 | 2023-03-22T12:08:44 | 2020-10-02T09:52:18 | I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://arxiv.org/pdf/1909.03186.pdf) for example.
Ca... | ibeltagy | https://github.com/huggingface/datasets/issues/617 | null | false |
699,462,293 | 616 | UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors | open | [] | 2020-09-11T15:39:16 | 2021-07-22T21:12:21 | null | I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra... | BramVanroy | https://github.com/huggingface/datasets/issues/616 | null | false |
699,410,773 | 615 | Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0 | closed | [] | 2020-09-11T14:50:38 | 2024-05-02T06:53:15 | 2020-09-19T16:46:31 | How to reproduce:
```python
from datasets import load_dataset
wiki = load_dataset("wikipedia", "20200501.en", split="train")
wiki[[0]]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<ipython-input-13-38... | lhoestq | https://github.com/huggingface/datasets/issues/615 | null | false |
699,177,110 | 614 | [doc] Update deploy.sh | closed | [] | 2020-09-11T11:06:13 | 2020-09-14T08:49:19 | 2020-09-14T08:49:17 | thomwolf | https://github.com/huggingface/datasets/pull/614 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/614",
"html_url": "https://github.com/huggingface/datasets/pull/614",
"diff_url": "https://github.com/huggingface/datasets/pull/614.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/614.patch",
"merged_at": "2020-09-14T08:49:17"... | true | |
699,117,070 | 613 | Add CoNLL-2003 shared task dataset | closed | [] | 2020-09-11T10:02:30 | 2020-10-05T10:43:05 | 2020-09-17T10:36:38 | Please consider adding CoNLL-2003 shared task dataset as it's beneficial for token classification tasks. The motivation behind this PR is the [PR](https://github.com/huggingface/transformers/pull/7041) in the transformers project. This dataset would be not only useful for the usual run-of-the-mill NER tasks but also fo... | vblagoje | https://github.com/huggingface/datasets/pull/613 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/613",
"html_url": "https://github.com/huggingface/datasets/pull/613",
"diff_url": "https://github.com/huggingface/datasets/pull/613.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/613.patch",
"merged_at": "2020-09-17T10:36:38"... | true |
699,008,644 | 612 | add multi-proc to dataset dict | closed | [] | 2020-09-11T08:18:13 | 2020-09-11T10:20:13 | 2020-09-11T10:20:11 | Add multi-proc to `DatasetDict` | thomwolf | https://github.com/huggingface/datasets/pull/612 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/612",
"html_url": "https://github.com/huggingface/datasets/pull/612",
"diff_url": "https://github.com/huggingface/datasets/pull/612.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/612.patch",
"merged_at": "2020-09-11T10:20:11"... | true |
698,863,988 | 611 | ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648 | closed | [] | 2020-09-11T05:29:12 | 2022-06-01T15:11:43 | 2022-06-01T15:11:43 | Hi, I'm trying to load a dataset from Dataframe, but I get the error:
```bash
---------------------------------------------------------------------------
ArrowCapacityError Traceback (most recent call last)
<ipython-input-7-146b6b495963> in <module>
----> 1 dataset = Dataset.from_pandas(emb)... | sangyx | https://github.com/huggingface/datasets/issues/611 | null | false |
698,349,388 | 610 | Load text file for RoBERTa pre-training. | closed | [] | 2020-09-10T18:41:38 | 2022-11-22T13:51:24 | 2022-11-22T13:51:23 | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | chiyuzhang94 | https://github.com/huggingface/datasets/issues/610 | null | false |
698,323,989 | 609 | Update GLUE URLs (now hosted on FB) | closed | [] | 2020-09-10T18:16:32 | 2020-09-14T19:06:02 | 2020-09-14T19:06:01 | NYU is switching dataset hosting from Google to FB. This PR closes https://github.com/huggingface/datasets/issues/608 and is necessary for https://github.com/jiant-dev/jiant/issues/161. This PR updates the data URLs based on changes made in https://github.com/nyu-mll/jiant/pull/1112. | jeswan | https://github.com/huggingface/datasets/pull/609 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/609",
"html_url": "https://github.com/huggingface/datasets/pull/609",
"diff_url": "https://github.com/huggingface/datasets/pull/609.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/609.patch",
"merged_at": null
} | true |
698,291,156 | 608 | Don't use the old NYU GLUE dataset URLs | closed | [] | 2020-09-10T17:47:02 | 2020-09-16T06:53:18 | 2020-09-16T06:53:18 | NYU is switching dataset hosting from Google to FB. Initial changes to `datasets` are in https://github.com/jeswan/nlp/commit/b7d4a071d432592ded971e30ef73330529de25ce. What tests do you suggest I run before opening a PR?
See: https://github.com/jiant-dev/jiant/issues/161 and https://github.com/nyu-mll/jiant/pull/111... | jeswan | https://github.com/huggingface/datasets/issues/608 | null | false |
698,094,442 | 607 | Add transmit_format wrapper and tests | closed | [] | 2020-09-10T15:03:50 | 2020-09-10T15:21:48 | 2020-09-10T15:21:47 | Same as #605 but using a decorator on-top of dataset transforms that are not in place | lhoestq | https://github.com/huggingface/datasets/pull/607 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/607",
"html_url": "https://github.com/huggingface/datasets/pull/607",
"diff_url": "https://github.com/huggingface/datasets/pull/607.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/607.patch",
"merged_at": "2020-09-10T15:21:47"... | true |
698,050,442 | 606 | Quick fix :) | closed | [] | 2020-09-10T14:32:06 | 2020-09-10T16:18:32 | 2020-09-10T16:18:30 | `nlp` => `datasets` | thomwolf | https://github.com/huggingface/datasets/pull/606 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/606",
"html_url": "https://github.com/huggingface/datasets/pull/606",
"diff_url": "https://github.com/huggingface/datasets/pull/606.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/606.patch",
"merged_at": "2020-09-10T16:18:30"... | true |
697,887,401 | 605 | [Datasets] Transmit format to children | closed | [] | 2020-09-10T12:30:18 | 2023-09-24T09:49:47 | 2020-09-10T16:15:21 | Transmit format to children obtained when processing a dataset.
Added a test.
When concatenating datasets, if the formats are disparate, the concatenated dataset has a format reset to defaults. | thomwolf | https://github.com/huggingface/datasets/pull/605 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/605",
"html_url": "https://github.com/huggingface/datasets/pull/605",
"diff_url": "https://github.com/huggingface/datasets/pull/605.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/605.patch",
"merged_at": null
} | true |
697,774,581 | 604 | Update bucket prefix | closed | [] | 2020-09-10T11:01:13 | 2020-09-10T12:45:33 | 2020-09-10T12:45:32 | cc @julien-c | lhoestq | https://github.com/huggingface/datasets/pull/604 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/604",
"html_url": "https://github.com/huggingface/datasets/pull/604",
"diff_url": "https://github.com/huggingface/datasets/pull/604.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/604.patch",
"merged_at": "2020-09-10T12:45:32"... | true |
697,758,750 | 603 | Set scripts version to master | closed | [] | 2020-09-10T10:47:44 | 2020-09-10T11:02:05 | 2020-09-10T11:02:04 | By default the scripts version is master, so that if the library is installed with
```
pip install git+http://github.com/huggingface/nlp.git
```
or
```
git clone http://github.com/huggingface/nlp.git
pip install -e ./nlp
```
it will use the latest scripts, and not the ones from the previous version. | lhoestq | https://github.com/huggingface/datasets/pull/603 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/603",
"html_url": "https://github.com/huggingface/datasets/pull/603",
"diff_url": "https://github.com/huggingface/datasets/pull/603.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/603.patch",
"merged_at": "2020-09-10T11:02:04"... | true |
697,636,605 | 602 | apply offset to indices in multiprocessed map | closed | [] | 2020-09-10T08:54:30 | 2020-09-10T11:03:39 | 2020-09-10T11:03:37 | Fix #597
I fixed the indices by applying an offset.
I added the case to our tests to make sure it doesn't happen again.
I also added the message proposed by @thomwolf in #597
```python
>>> d.select(range(10)).map(fn, with_indices=True, batched=True, num_proc=2, load_from_cache_file=False)
Done writing 10 ... | lhoestq | https://github.com/huggingface/datasets/pull/602 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/602",
"html_url": "https://github.com/huggingface/datasets/pull/602",
"diff_url": "https://github.com/huggingface/datasets/pull/602.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/602.patch",
"merged_at": "2020-09-10T11:03:37"... | true |
697,574,848 | 601 | check if transformers has PreTrainedTokenizerBase | closed | [] | 2020-09-10T07:54:56 | 2020-09-10T11:01:37 | 2020-09-10T11:01:36 | Fix #598 | lhoestq | https://github.com/huggingface/datasets/pull/601 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/601",
"html_url": "https://github.com/huggingface/datasets/pull/601",
"diff_url": "https://github.com/huggingface/datasets/pull/601.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/601.patch",
"merged_at": "2020-09-10T11:01:36"... | true |
697,496,913 | 600 | Pickling error when loading dataset | closed | [] | 2020-09-10T06:28:08 | 2020-09-25T14:31:54 | 2020-09-25T14:31:54 | Hi,
I modified line 136 in the original [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) as:
```
# line 136: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)
dataset = load_da... | kandorm | https://github.com/huggingface/datasets/issues/600 | null | false |
697,377,786 | 599 | Add MATINF dataset | closed | [] | 2020-09-10T03:31:09 | 2023-09-24T09:50:08 | 2020-09-17T12:17:25 | @lhoestq The command to create metadata failed. I guess it's because the zip is not downloaded from a remote address? How to solve that? Also the CI fails and I don't know how to fix that :( | JetRunner | https://github.com/huggingface/datasets/pull/599 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/599",
"html_url": "https://github.com/huggingface/datasets/pull/599",
"diff_url": "https://github.com/huggingface/datasets/pull/599.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/599.patch",
"merged_at": null
} | true |
697,156,501 | 598 | The current version of the package on github has an error when loading dataset | closed | [] | 2020-09-09T21:03:23 | 2020-09-10T06:25:21 | 2020-09-09T22:57:28 | Instead of downloading the package from pip, downloading the version from source will result in an error when loading dataset (the pip version is completely fine):
To recreate the error:
First, installing nlp directly from source:
```
git clone https://github.com/huggingface/nlp.git
cd nlp
pip install -e .
``... | zeyuyun1 | https://github.com/huggingface/datasets/issues/598 | null | false |
697,112,029 | 597 | Indices incorrect with multiprocessing | closed | [] | 2020-09-09T19:50:56 | 2020-09-10T11:03:37 | 2020-09-10T11:03:37 | When `num_proc` > 1, the indices argument passed to the map function is incorrect:
```python
d = load_dataset('imdb', split='test[:1%]')
def fn(x, inds):
print(inds)
return x
d.select(range(10)).map(fn, with_indices=True, batched=True)
# [0, 1]
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
d.select(range(10... | joeddav | https://github.com/huggingface/datasets/issues/597 | null | false |
696,928,139 | 596 | [style/quality] Moving to isort 5.0.0 + style/quality on datasets and metrics | closed | [] | 2020-09-09T15:47:21 | 2020-09-10T10:05:04 | 2020-09-10T10:05:03 | Move the repo to isort 5.0.0.
Also start testing style/quality on datasets and metrics.
Specific rule: we allow F401 (unused imports) in metrics to be able to add imports to detect early on missing dependencies.
Maybe we could add this in datasets but while cleaning this I've seen many examples of really unused i... | thomwolf | https://github.com/huggingface/datasets/pull/596 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/596",
"html_url": "https://github.com/huggingface/datasets/pull/596",
"diff_url": "https://github.com/huggingface/datasets/pull/596.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/596.patch",
"merged_at": "2020-09-10T10:05:03"... | true |
696,892,304 | 595 | `Dataset`/`DatasetDict` has no attribute 'save_to_disk' | closed | [] | 2020-09-09T15:01:52 | 2020-09-09T16:20:19 | 2020-09-09T16:20:18 | Hi,
As the title indicates, both `Dataset` and `DatasetDict` classes don't seem to have the `save_to_disk` method. While the file [`arrow_dataset.py`](https://github.com/huggingface/nlp/blob/34bf0b03bfe03e7f77b8fec1cd48f5452c4fc7c1/src/nlp/arrow_dataset.py) in the repo here has the method, the file `arrow_dataset.p... | sudarshan85 | https://github.com/huggingface/datasets/issues/595 | null | false |
696,816,893 | 594 | Fix germeval url | closed | [] | 2020-09-09T13:29:35 | 2020-09-09T13:34:35 | 2020-09-09T13:34:34 | Continuation of #593 but without the dummy data hack | lhoestq | https://github.com/huggingface/datasets/pull/594 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/594",
"html_url": "https://github.com/huggingface/datasets/pull/594",
"diff_url": "https://github.com/huggingface/datasets/pull/594.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/594.patch",
"merged_at": "2020-09-09T13:34:34"... | true |
696,679,182 | 593 | GermEval 2014: new download urls | closed | [] | 2020-09-09T10:07:29 | 2020-09-09T14:16:54 | 2020-09-09T13:35:15 | Hi,
unfortunately, the download links for the GermEval 2014 dataset have changed: they're now located on a Google Drive.
I changed the URLs and bumped the version from 1.0.0 to 2.0.0. | stefan-it | https://github.com/huggingface/datasets/pull/593 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/593",
"html_url": "https://github.com/huggingface/datasets/pull/593",
"diff_url": "https://github.com/huggingface/datasets/pull/593.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/593.patch",
"merged_at": null
} | true |
696,619,986 | 592 | Test in memory and on disk | closed | [] | 2020-09-09T08:59:30 | 2020-09-09T13:50:04 | 2020-09-09T13:50:03 | I added test parameters to do every test both in memory and on disk.
I also found a bug in concatenate_dataset thanks to the new tests and fixed it. | lhoestq | https://github.com/huggingface/datasets/pull/592 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/592",
"html_url": "https://github.com/huggingface/datasets/pull/592",
"diff_url": "https://github.com/huggingface/datasets/pull/592.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/592.patch",
"merged_at": "2020-09-09T13:50:03"... | true |
696,530,413 | 591 | fix #589 (backward compat) | closed | [] | 2020-09-09T07:33:13 | 2020-09-09T08:57:56 | 2020-09-09T08:57:55 | Fix #589 | thomwolf | https://github.com/huggingface/datasets/pull/591 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/591",
"html_url": "https://github.com/huggingface/datasets/pull/591",
"diff_url": "https://github.com/huggingface/datasets/pull/591.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/591.patch",
"merged_at": "2020-09-09T08:57:54"... | true |
696,501,827 | 590 | The process cannot access the file because it is being used by another process (windows) | closed | [] | 2020-09-09T07:01:36 | 2020-09-25T14:02:28 | 2020-09-25T14:02:28 | Hi, I consistently get the following error when developing in my PC (windows 10):
```
train_dataset = train_dataset.map(convert_to_features, batched=True)
File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\envs\py38\lib\site-packages\nlp\arrow_dataset.py", line 970, in map
shutil.move(tmp_file.... | saareliad | https://github.com/huggingface/datasets/issues/590 | null | false |
696,488,447 | 589 | Cannot use nlp.load_dataset text, AttributeError: module 'nlp.utils' has no attribute 'logging' | closed | [] | 2020-09-09T06:46:53 | 2020-09-09T08:57:54 | 2020-09-09T08:57:54 |
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/load.py", line 533, in load_dataset
builder_cls = import_main_class(module_path, dataset=True)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp... | ksjae | https://github.com/huggingface/datasets/issues/589 | null | false |
695,249,809 | 588 | Support pathlike obj in load dataset | closed | [] | 2020-09-07T16:13:21 | 2020-09-08T07:45:19 | 2020-09-08T07:45:18 | Fix #582
(I recreated the PR, I got an issue with git) | lhoestq | https://github.com/huggingface/datasets/pull/588 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/588",
"html_url": "https://github.com/huggingface/datasets/pull/588",
"diff_url": "https://github.com/huggingface/datasets/pull/588.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/588.patch",
"merged_at": "2020-09-08T07:45:17"... | true |
695,246,018 | 587 | Support pathlike obj in load dataset | closed | [] | 2020-09-07T16:09:16 | 2020-09-07T16:10:35 | 2020-09-07T16:10:35 | Fix #582 | lhoestq | https://github.com/huggingface/datasets/pull/587 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/587",
"html_url": "https://github.com/huggingface/datasets/pull/587",
"diff_url": "https://github.com/huggingface/datasets/pull/587.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/587.patch",
"merged_at": null
} | true |
695,237,999 | 586 | Better message when data files is empty | closed | [] | 2020-09-07T15:59:57 | 2020-09-09T09:00:09 | 2020-09-09T09:00:08 | Fix #581 | lhoestq | https://github.com/huggingface/datasets/pull/586 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/586",
"html_url": "https://github.com/huggingface/datasets/pull/586",
"diff_url": "https://github.com/huggingface/datasets/pull/586.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/586.patch",
"merged_at": "2020-09-09T09:00:07"... | true |
695,191,209 | 585 | Fix select for pyarrow < 1.0.0 | closed | [] | 2020-09-07T15:02:52 | 2020-09-08T07:43:17 | 2020-09-08T07:43:15 | Fix #583 | lhoestq | https://github.com/huggingface/datasets/pull/585 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/585",
"html_url": "https://github.com/huggingface/datasets/pull/585",
"diff_url": "https://github.com/huggingface/datasets/pull/585.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/585.patch",
"merged_at": "2020-09-08T07:43:15"... | true |
695,186,652 | 584 | Use github versioning | closed | [] | 2020-09-07T14:58:15 | 2020-09-09T13:37:35 | 2020-09-09T13:37:34 | Right now dataset scripts and metrics are downloaded from S3 which is in sync with master. It means that it's not currently possible to pin the dataset/metric script version.
To fix that I changed the download url from S3 to github, and added a `version` parameter in `load_dataset` and `load_metric` to pin a certai... | lhoestq | https://github.com/huggingface/datasets/pull/584 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/584",
"html_url": "https://github.com/huggingface/datasets/pull/584",
"diff_url": "https://github.com/huggingface/datasets/pull/584.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/584.patch",
"merged_at": "2020-09-09T13:37:34"... | true |
695,166,265 | 583 | ArrowIndexError on Dataset.select | closed | [] | 2020-09-07T14:36:29 | 2020-09-08T07:43:15 | 2020-09-08T07:43:15 | If the indices table consists in several chunks, then `dataset.select` results in an `ArrowIndexError` error for pyarrow < 1.0.0
Example:
```python
from nlp import load_dataset
mnli = load_dataset("glue", "mnli", split="train")
shuffled = mnli.shuffle(seed=42)
mnli.select(list(range(len(mnli))))
```
rai... | lhoestq | https://github.com/huggingface/datasets/issues/583 | null | false |
695,126,456 | 582 | Allow for PathLike objects | closed | [] | 2020-09-07T13:54:51 | 2020-09-08T07:45:17 | 2020-09-08T07:45:17 | Using PathLike objects as input for `load_dataset` does not seem to work. The following will throw an error.
```python
files = list(Path(r"D:\corpora\yourcorpus").glob("*.txt"))
dataset = load_dataset("text", data_files=files)
```
Traceback:
```
Traceback (most recent call last):
File "C:/dev/python/dut... | BramVanroy | https://github.com/huggingface/datasets/issues/582 | null | false |
695,120,517 | 581 | Better error message when input file does not exist | closed | [] | 2020-09-07T13:47:59 | 2020-09-09T09:00:07 | 2020-09-09T09:00:07 | In the following scenario, when `data_files` is an empty list, the stack trace and error message could be improved. This can probably be solved by checking for each file whether it actually exists and/or whether the argument is not false-y.
```python
dataset = load_dataset("text", data_files=[])
```
Example err... | BramVanroy | https://github.com/huggingface/datasets/issues/581 | null | false |
694,954,551 | 580 | nlp re-creates already-there caches when using a script, but not within a shell | closed | [] | 2020-09-07T10:23:50 | 2020-09-07T15:19:09 | 2020-09-07T14:26:41 | `nlp` keeps creating new caches for the same file when launching `filter` from a script, and behaves correctly from within the shell.
Example: try running
```
import nlp
hans_easy_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 0)
hans_hard_data = nlp.load_dataset('hans', s... | TevenLeScao | https://github.com/huggingface/datasets/issues/580 | null | false |
694,947,599 | 579 | Doc metrics | closed | [] | 2020-09-07T10:15:24 | 2020-09-10T13:06:11 | 2020-09-10T13:06:10 | Adding documentation on metrics loading/using/sharing | thomwolf | https://github.com/huggingface/datasets/pull/579 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/579",
"html_url": "https://github.com/huggingface/datasets/pull/579",
"diff_url": "https://github.com/huggingface/datasets/pull/579.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/579.patch",
"merged_at": "2020-09-10T13:06:10"... | true |
694,849,940 | 578 | Add CommonGen Dataset | closed | [] | 2020-09-07T08:17:17 | 2020-09-07T11:50:29 | 2020-09-07T11:49:07 | CC Authors:
@yuchenlin @MichaelZhouwang | JetRunner | https://github.com/huggingface/datasets/pull/578 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/578",
"html_url": "https://github.com/huggingface/datasets/pull/578",
"diff_url": "https://github.com/huggingface/datasets/pull/578.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/578.patch",
"merged_at": "2020-09-07T11:49:07"... | true |
694,607,148 | 577 | Some languages in wikipedia dataset are not loading | closed | [] | 2020-09-07T01:16:29 | 2023-04-11T22:50:48 | 2022-10-11T11:16:04 | Hi,
I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them:
```
import nlp
langs = ['ar', 'af', '... | gaguilar | https://github.com/huggingface/datasets/issues/577 | null | false |
694,348,645 | 576 | Fix the code block in doc | closed | [] | 2020-09-06T11:40:55 | 2020-09-07T07:37:32 | 2020-09-07T07:37:18 | JetRunner | https://github.com/huggingface/datasets/pull/576 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/576",
"html_url": "https://github.com/huggingface/datasets/pull/576",
"diff_url": "https://github.com/huggingface/datasets/pull/576.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/576.patch",
"merged_at": "2020-09-07T07:37:18"... | true | |
693,691,611 | 575 | Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading. | closed | [] | 2020-09-04T21:46:25 | 2020-09-22T10:41:36 | 2020-09-22T10:41:36 | Hi,
I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset:
```
>>> from nlp import load_dataset
>>> dataset = load_dataset('glue', 'mrpc', split='train')
```
However, this ran into a `ConnectionError` saying it could not reach the URL (just pasting the la... | sudarshan85 | https://github.com/huggingface/datasets/issues/575 | null | false |
693,364,853 | 574 | Add modules cache | closed | [] | 2020-09-04T16:30:03 | 2020-09-22T10:27:08 | 2020-09-07T09:01:35 | As discussed in #554 , we should use a module cache directory outside of the python packages directory since we may not have write permissions.
I added a new HF_MODULES_PATH directory that is added to the python path when doing `import nlp`.
In this directory, a module `nlp_modules` is created so that datasets can ... | lhoestq | https://github.com/huggingface/datasets/pull/574 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/574",
"html_url": "https://github.com/huggingface/datasets/pull/574",
"diff_url": "https://github.com/huggingface/datasets/pull/574.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/574.patch",
"merged_at": "2020-09-07T09:01:35"... | true |
693,091,790 | 573 | Faster caching for text dataset | closed | [] | 2020-09-04T11:58:34 | 2020-09-04T12:53:24 | 2020-09-04T12:53:23 | As mentioned in #546 and #548 , hashing `data_files` contents to get the cache directory name for a text dataset can take a long time.
To make it faster I changed the hashing so that it takes into account the `path` and the `last modified timestamp` of each data file, instead of iterating through the content of each... | lhoestq | https://github.com/huggingface/datasets/pull/573 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/573",
"html_url": "https://github.com/huggingface/datasets/pull/573",
"diff_url": "https://github.com/huggingface/datasets/pull/573.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/573.patch",
"merged_at": "2020-09-04T12:53:23"... | true |
692,598,231 | 572 | Add CLUE Benchmark (11 datasets) | closed | [] | 2020-09-04T01:57:40 | 2020-09-07T09:59:11 | 2020-09-07T09:59:10 | Add 11 tasks of [CLUE](https://github.com/CLUEbenchmark/CLUE). | JetRunner | https://github.com/huggingface/datasets/pull/572 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/572",
"html_url": "https://github.com/huggingface/datasets/pull/572",
"diff_url": "https://github.com/huggingface/datasets/pull/572.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/572.patch",
"merged_at": "2020-09-07T09:59:10"... | true |
692,109,287 | 571 | Serialization | closed | [] | 2020-09-03T16:21:38 | 2020-09-07T07:46:08 | 2020-09-07T07:46:07 | I added `save` and `load` methods to serialize/deserialize a dataset object in a folder.
It moves the arrow files there (or writes them if the tables were in memory), and saves the pickle state in a json file `state.json`, except the info, which is in a separate file `dataset_info.json`.
Example:
```python
import ... | lhoestq | https://github.com/huggingface/datasets/pull/571 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/571",
"html_url": "https://github.com/huggingface/datasets/pull/571",
"diff_url": "https://github.com/huggingface/datasets/pull/571.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/571.patch",
"merged_at": "2020-09-07T07:46:07"... | true |
691,846,397 | 570 | add reuters21578 dataset | closed | [] | 2020-09-03T10:25:47 | 2020-09-03T10:46:52 | 2020-09-03T10:46:51 | Reopening the PR after the revert of the previous merge. | jplu | https://github.com/huggingface/datasets/pull/570 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/570",
"html_url": "https://github.com/huggingface/datasets/pull/570",
"diff_url": "https://github.com/huggingface/datasets/pull/570.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/570.patch",
"merged_at": "2020-09-03T10:46:51"... | true |
691,832,720 | 569 | Revert "add reuters21578 dataset" | closed | [] | 2020-09-03T10:06:16 | 2020-09-03T10:07:13 | 2020-09-03T10:07:12 | Reverts huggingface/nlp#471 | jplu | https://github.com/huggingface/datasets/pull/569 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/569",
"html_url": "https://github.com/huggingface/datasets/pull/569",
"diff_url": "https://github.com/huggingface/datasets/pull/569.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/569.patch",
"merged_at": "2020-09-03T10:07:12"... | true |
691,638,656 | 568 | `metric.compute` throws `ArrowInvalid` error | closed | [] | 2020-09-03T04:56:57 | 2020-10-05T16:33:53 | 2020-10-05T16:33:53 | I get the following error with `rouge.compute`. It happens only with distributed training, and it occurs randomly, so I can't easily reproduce it. This is using `nlp==0.4.0`
```
File "/home/beltagy/trainer.py", line 92, in validation_step
rouge_scores = rouge.compute(predictions=generated_str, references=gold_st... | ibeltagy | https://github.com/huggingface/datasets/issues/568 | null | false |
691,430,245 | 567 | Fix BLEURT metrics for backward compatibility | closed | [] | 2020-09-02T21:22:35 | 2020-09-03T07:29:52 | 2020-09-03T07:29:50 | Fix #565 | thomwolf | https://github.com/huggingface/datasets/pull/567 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/567",
"html_url": "https://github.com/huggingface/datasets/pull/567",
"diff_url": "https://github.com/huggingface/datasets/pull/567.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/567.patch",
"merged_at": "2020-09-03T07:29:50"... | true |
691,160,208 | 566 | Remove logger pickling to fix gg colab issues | closed | [] | 2020-09-02T16:16:21 | 2020-09-03T16:31:53 | 2020-09-03T16:31:52 | `logger` objects are not picklable in google colab, contrary to `logger` objects in jupyter notebooks or in python shells.
It creates some issues in google colab right now.
Indeed by calling any `Dataset` method, the fingerprint update pickles the transform function, and as the logger comes with it, it results in... | lhoestq | https://github.com/huggingface/datasets/pull/566 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/566",
"html_url": "https://github.com/huggingface/datasets/pull/566",
"diff_url": "https://github.com/huggingface/datasets/pull/566.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/566.patch",
"merged_at": "2020-09-03T16:31:52"... | true |
691,039,121 | 565 | No module named 'nlp.logging' | closed | [] | 2020-09-02T13:49:50 | 2020-09-03T07:29:50 | 2020-09-03T07:29:50 | Hi, I am using nlp version 0.4.0. Trying to use bleurt as an eval metric, however, the bleurt script imports nlp.logging which creates the following error. What am I missing?
```
>>> import nlp
2020-09-02 13:47:09.210310: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic l... | melody-ju | https://github.com/huggingface/datasets/issues/565 | null | false |
691,000,020 | 564 | Wait for writing in distributed metrics | closed | [] | 2020-09-02T12:58:50 | 2020-09-09T09:13:23 | 2020-09-09T09:13:22 | There were CI bugs where a distributed metric would try to read all the files in process 0 while the other processes haven't started writing.
To fix that I added a custom locking mechanism that waits for the file to exist before trying to read it | lhoestq | https://github.com/huggingface/datasets/pull/564 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/564",
"html_url": "https://github.com/huggingface/datasets/pull/564",
"diff_url": "https://github.com/huggingface/datasets/pull/564.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/564.patch",
"merged_at": "2020-09-09T09:13:22"... | true |
690,908,674 | 563 | [Large datasets] Speed up download and processing | closed | [] | 2020-09-02T10:31:54 | 2020-09-09T09:03:33 | 2020-09-09T09:03:32 | Various improvements to speed-up creation and processing of large scale datasets.
Currently:
- distributed downloads
- remove etag from datafiles hashes to spare a request when restarting a failed download | thomwolf | https://github.com/huggingface/datasets/pull/563 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/563",
"html_url": "https://github.com/huggingface/datasets/pull/563",
"diff_url": "https://github.com/huggingface/datasets/pull/563.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/563.patch",
"merged_at": "2020-09-09T09:03:32"... | true |
690,907,604 | 562 | [Reproducibility] Allow to pin versions of datasets/metrics | closed | [] | 2020-09-02T10:30:13 | 2023-09-24T09:49:42 | 2020-09-09T13:04:54 | Repurpose the `version` attribute in datasets and metrics to let the user pin a specific version of datasets and metric scripts:
```
dataset = nlp.load_dataset('squad', version='1.0.0')
metric = nlp.load_metric('squad', version='1.0.0')
```
Notes:
- version numbers are the release versions of the library
- curre... | thomwolf | https://github.com/huggingface/datasets/pull/562 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/562",
"html_url": "https://github.com/huggingface/datasets/pull/562",
"diff_url": "https://github.com/huggingface/datasets/pull/562.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/562.patch",
"merged_at": null
} | true |
690,871,415 | 561 | Made `share_dataset` more readable | closed | [] | 2020-09-02T09:34:48 | 2020-09-03T09:00:30 | 2020-09-03T09:00:29 | TevenLeScao | https://github.com/huggingface/datasets/pull/561 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/561",
"html_url": "https://github.com/huggingface/datasets/pull/561",
"diff_url": "https://github.com/huggingface/datasets/pull/561.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/561.patch",
"merged_at": "2020-09-03T09:00:29"... | true | |
690,488,764 | 560 | Using custom DownloadConfig results in an error | closed | [] | 2020-09-01T22:23:02 | 2022-10-04T17:23:45 | 2022-10-04T17:23:45 | ## Version / Environment
Ubuntu 18.04
Python 3.6.8
nlp 0.4.0
## Description
Loading `imdb` dataset works fine when I don't specify any `download_config` argument. When I create a custom `DownloadConfig` object and pass it to the `nlp.load_dataset` function, this results in an error.
## How to reprodu... | ynouri | https://github.com/huggingface/datasets/issues/560 | null | false |
690,411,263 | 559 | Adding the KILT knowledge source and tasks | closed | [] | 2020-09-01T20:05:13 | 2020-09-04T18:05:47 | 2020-09-04T18:05:47 | This adds Wikipedia pre-processed for KILT, as well as the task data. Only the question IDs are provided for TriviaQA, but they can easily be mapped back with:
```
import nlp
kilt_wikipedia = nlp.load_dataset('kilt_wikipedia')
kilt_tasks = nlp.load_dataset('kilt_tasks')
triviaqa = nlp.load_dataset('trivia_qa',... | yjernite | https://github.com/huggingface/datasets/pull/559 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/559",
"html_url": "https://github.com/huggingface/datasets/pull/559",
"diff_url": "https://github.com/huggingface/datasets/pull/559.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/559.patch",
"merged_at": "2020-09-04T18:05:47"... | true |
690,318,105 | 558 | Rerun pip install -e | closed | [] | 2020-09-01T17:24:39 | 2020-09-01T17:24:51 | 2020-09-01T17:24:50 | Hopefully it fixes the github actions | lhoestq | https://github.com/huggingface/datasets/pull/558 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/558",
"html_url": "https://github.com/huggingface/datasets/pull/558",
"diff_url": "https://github.com/huggingface/datasets/pull/558.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/558.patch",
"merged_at": "2020-09-01T17:24:50"... | true |
690,220,135 | 557 | Fix a few typos | closed | [] | 2020-09-01T15:03:24 | 2020-09-02T07:39:08 | 2020-09-02T07:39:07 | julien-c | https://github.com/huggingface/datasets/pull/557 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/557",
"html_url": "https://github.com/huggingface/datasets/pull/557",
"diff_url": "https://github.com/huggingface/datasets/pull/557.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/557.patch",
"merged_at": "2020-09-02T07:39:06"... | true | |
690,218,423 | 556 | Add DailyDialog | closed | [] | 2020-09-01T15:01:15 | 2020-09-03T15:42:03 | 2020-09-03T15:38:39 | http://yanran.li/dailydialog.html
https://arxiv.org/pdf/1710.03957.pdf
| julien-c | https://github.com/huggingface/datasets/pull/556 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/556",
"html_url": "https://github.com/huggingface/datasets/pull/556",
"diff_url": "https://github.com/huggingface/datasets/pull/556.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/556.patch",
"merged_at": "2020-09-03T15:38:39"... | true |
690,197,725 | 555 | Upgrade pip in benchmark github action | closed | [] | 2020-09-01T14:37:26 | 2020-09-01T15:26:16 | 2020-09-01T15:26:15 | It looks like it fixes the `import nlp` issue we have | lhoestq | https://github.com/huggingface/datasets/pull/555 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/555",
"html_url": "https://github.com/huggingface/datasets/pull/555",
"diff_url": "https://github.com/huggingface/datasets/pull/555.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/555.patch",
"merged_at": "2020-09-01T15:26:15"... | true |
690,173,214 | 554 | nlp downloads to its module path | closed | [] | 2020-09-01T14:06:14 | 2020-09-11T06:19:24 | 2020-09-11T06:19:24 | I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems:
```>>> import nlp
>>> squad_dataset = nlp.load_dataset('squad')
... | danieldk | https://github.com/huggingface/datasets/issues/554 | null | false |
690,143,182 | 553 | [Fix GitHub Actions] test adding tmate | closed | [] | 2020-09-01T13:28:03 | 2021-05-05T18:24:38 | 2020-09-03T09:01:13 | thomwolf | https://github.com/huggingface/datasets/pull/553 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/553",
"html_url": "https://github.com/huggingface/datasets/pull/553",
"diff_url": "https://github.com/huggingface/datasets/pull/553.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/553.patch",
"merged_at": null
} | true | |
690,079,429 | 552 | Add multiprocessing | closed | [] | 2020-09-01T11:56:17 | 2020-09-22T15:11:56 | 2020-09-02T10:01:25 | Adding multiprocessing to `.map`
It works in 3 steps:
- shard the dataset in `num_proc` shards
- spawn one process per shard and call `map` on them
- concatenate the resulting datasets
Example of usage:
```python
from nlp import load_dataset
dataset = load_dataset("squad", split="train")
def function... | lhoestq | https://github.com/huggingface/datasets/pull/552 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/552",
"html_url": "https://github.com/huggingface/datasets/pull/552",
"diff_url": "https://github.com/huggingface/datasets/pull/552.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/552.patch",
"merged_at": "2020-09-02T10:01:25"... | true |
690,034,762 | 551 | added HANS dataset | closed | [] | 2020-09-01T10:42:02 | 2020-09-01T12:17:10 | 2020-09-01T12:17:10 | Adds the [HANS](https://github.com/tommccoy1/hans) dataset to evaluate NLI systems. | TevenLeScao | https://github.com/huggingface/datasets/pull/551 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/551",
"html_url": "https://github.com/huggingface/datasets/pull/551",
"diff_url": "https://github.com/huggingface/datasets/pull/551.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/551.patch",
"merged_at": "2020-09-01T12:17:10"... | true |
689,775,914 | 550 | [BUGFIX] Solving mismatched checksum issue for the LinCE dataset (#539) | closed | [] | 2020-09-01T03:27:03 | 2020-09-03T09:06:01 | 2020-09-03T09:06:01 | Hi,
I have added the updated `dataset_infos.json` file for the LinCE benchmark. This update is to fix the mismatched checksum bug #539 for one of the datasets in the LinCE benchmark. To update the file, I run this command from the nlp root directory:
```
python nlp-cli test ./datasets/lince --save_infos --all_co... | gaguilar | https://github.com/huggingface/datasets/pull/550 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/550",
"html_url": "https://github.com/huggingface/datasets/pull/550",
"diff_url": "https://github.com/huggingface/datasets/pull/550.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/550.patch",
"merged_at": "2020-09-03T09:06:01"... | true |
689,766,465 | 549 | Fix bleurt logging import | closed | [] | 2020-09-01T03:01:25 | 2020-09-03T18:04:46 | 2020-09-03T09:04:20 | Bleurt started throwing an error in some code we have.
This looks like the fix but...
It's also unnerving that even a prebuilt docker image with pinned versions can be working 1 day and then fail the next (especially for production systems).
Any way for us to pin your metrics code so that they are guaranteed not... | jbragg | https://github.com/huggingface/datasets/pull/549 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/549",
"html_url": "https://github.com/huggingface/datasets/pull/549",
"diff_url": "https://github.com/huggingface/datasets/pull/549.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/549.patch",
"merged_at": null
} | true |
689,285,996 | 548 | [Breaking] Switch text loading to multi-threaded PyArrow loading | closed | [] | 2020-08-31T15:15:41 | 2020-09-08T10:19:58 | 2020-09-08T10:19:57 | Test if we can get better performances for large-scale text datasets by using multi-threaded text file loading based on Apache Arrow multi-threaded CSV loader.
If it works ok, it would fix #546.
**Breaking change**:
The text lines now do not include final line-breaks anymore. | thomwolf | https://github.com/huggingface/datasets/pull/548 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/548",
"html_url": "https://github.com/huggingface/datasets/pull/548",
"diff_url": "https://github.com/huggingface/datasets/pull/548.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/548.patch",
"merged_at": "2020-09-08T10:19:57"... | true |
689,268,589 | 547 | [Distributed] Making loading distributed datasets a bit safer | closed | [] | 2020-08-31T14:51:34 | 2020-08-31T15:16:30 | 2020-08-31T15:16:29 | Add some file-locks during dataset loading | thomwolf | https://github.com/huggingface/datasets/pull/547 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/547",
"html_url": "https://github.com/huggingface/datasets/pull/547",
"diff_url": "https://github.com/huggingface/datasets/pull/547.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/547.patch",
"merged_at": "2020-08-31T15:16:29"... | true |
689,186,526 | 546 | Very slow data loading on large dataset | closed | [] | 2020-08-31T12:57:23 | 2024-01-02T20:26:24 | 2020-09-08T10:19:57 | I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data.
It has been 8 hours and still, it is on the loading steps.
It does work when the text dataset size is small about 1 GB, but it doesn't scale.
It also uses a single thread during the data loading step.
```
train_fil... | agemagician | https://github.com/huggingface/datasets/issues/546 | null | false |
689,138,878 | 545 | New release coming up for this library | closed | [] | 2020-08-31T11:37:38 | 2021-01-13T10:59:04 | 2021-01-13T10:59:04 | Hi all,
A few words on the roadmap for this library.
The next release will be a big one and is planned at the end of this week.
In addition to the support for indexed datasets (useful for non-parametric models like REALM, RAG, DPR, knn-LM and many other fast dataset retrieval techniques), it will:
- have support f... | thomwolf | https://github.com/huggingface/datasets/issues/545 | null | false |
689,062,519 | 544 | [Distributed] Fix load_dataset error when multiprocessing + add test | closed | [] | 2020-08-31T09:30:10 | 2020-08-31T11:15:11 | 2020-08-31T11:15:10 | Fix #543 + add test | thomwolf | https://github.com/huggingface/datasets/pull/544 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/544",
"html_url": "https://github.com/huggingface/datasets/pull/544",
"diff_url": "https://github.com/huggingface/datasets/pull/544.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/544.patch",
"merged_at": "2020-08-31T11:15:10"... | true |
688,644,407 | 543 | nlp.load_dataset is not safe for multi processes when loading from local files | closed | [] | 2020-08-30T03:20:34 | 2020-08-31T11:15:10 | 2020-08-31T11:15:10 | Loading from local files, e.g., `dataset = nlp.load_dataset('csv', data_files=['file_1.csv', 'file_2.csv'])`
concurrently from multiple processes, will raise `FileExistsError` from builder's line 430, https://github.com/huggingface/nlp/blob/6655008c738cb613c522deb3bd18e35a67b2a7e5/src/nlp/builder.py#L423-L438
Likel... | luyug | https://github.com/huggingface/datasets/issues/543 | null | false |
688,555,036 | 542 | Add TensorFlow example | closed | [] | 2020-08-29T15:39:27 | 2020-08-31T09:49:20 | 2020-08-31T09:49:19 | Update the Quick Tour documentation in order to add the TensorFlow equivalent source code for the classification example. Now it is possible to select either the code in PyTorch or in TensorFlow in the Quick tour. | jplu | https://github.com/huggingface/datasets/pull/542 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/542",
"html_url": "https://github.com/huggingface/datasets/pull/542",
"diff_url": "https://github.com/huggingface/datasets/pull/542.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/542.patch",
"merged_at": "2020-08-31T09:49:19"... | true |
688,521,224 | 541 | Best practices for training tokenizers with nlp | closed | [] | 2020-08-29T12:06:49 | 2022-10-04T17:28:04 | 2022-10-04T17:28:04 | Hi, thank you for developing this library.
What do you think are the best practices for training tokenizers using `nlp`? In the document and examples, I could only find pre-trained tokenizers used. | moskomule | https://github.com/huggingface/datasets/issues/541 | null | false |
688,475,884 | 540 | [BUGFIX] Fix Race Dataset Checksum bug | closed | [] | 2020-08-29T07:00:10 | 2020-09-18T11:42:20 | 2020-09-18T11:42:20 | In #537 I noticed that there was a bug in checksum checking when I tried to download the race dataset. The reason for this is that the current preprocessing was only considering the `high school` data and was ignoring the `middle` one. This PR just fixes it :)
Moreover, I have added some descriptions. | abarbosa94 | https://github.com/huggingface/datasets/pull/540 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/540",
"html_url": "https://github.com/huggingface/datasets/pull/540",
"diff_url": "https://github.com/huggingface/datasets/pull/540.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/540.patch",
"merged_at": "2020-09-18T11:42:20"... | true |
688,323,602 | 539 | [Dataset] `NonMatchingChecksumError` due to an update in the LinCE benchmark data | closed | [] | 2020-08-28T19:55:51 | 2020-09-03T16:34:02 | 2020-09-03T16:34:01 | Hi,
There is a `NonMatchingChecksumError` error for the `lid_msaea` (language identification for Modern Standard Arabic - Egyptian Arabic) dataset from the LinCE benchmark due to a minor update on that dataset.
How can I update the checksum of the library to solve this issue? The error is below and it also appea... | gaguilar | https://github.com/huggingface/datasets/issues/539 | null | false |
688,015,912 | 538 | [logging] Add centralized logging - Bump-up cache loads to warnings | closed | [] | 2020-08-28T11:42:29 | 2020-08-31T11:42:51 | 2020-08-31T11:42:51 | Add a `nlp.logging` module to set the global logging level easily. The verbosity level also controls the tqdm bars (disabled when set higher than INFO).
You can use:
```
nlp.logging.set_verbosity(verbosity: int)
nlp.logging.set_verbosity_info()
nlp.logging.set_verbosity_warning()
nlp.logging.set_verbosity_debug... | thomwolf | https://github.com/huggingface/datasets/pull/538 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/538",
"html_url": "https://github.com/huggingface/datasets/pull/538",
"diff_url": "https://github.com/huggingface/datasets/pull/538.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/538.patch",
"merged_at": "2020-08-31T11:42:50"... | true |
687,614,699 | 537 | [Dataset] RACE dataset Checksums error | closed | [] | 2020-08-27T23:58:16 | 2020-09-18T12:07:04 | 2020-09-18T12:07:04 | Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps:
```
dataset = nlp.load_dataset("race")
len(dataset["train"]), len(dataset["validation"])
```
But then I got the following error:
```
----------------------------------... | abarbosa94 | https://github.com/huggingface/datasets/issues/537 | null | false |
687,378,332 | 536 | Fingerprint | closed | [] | 2020-08-27T16:27:09 | 2020-08-31T14:20:40 | 2020-08-31T14:20:39 | This PR is a continuation of #513 , in which many in-place functions were introduced or updated (cast_, flatten_) etc.
However the caching didn't handle these changes. Indeed the caching took into account only the previous cache file name of the table, and not the possible in-place transforms of the table.
To fix t... | lhoestq | https://github.com/huggingface/datasets/pull/536 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/536",
"html_url": "https://github.com/huggingface/datasets/pull/536",
"diff_url": "https://github.com/huggingface/datasets/pull/536.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/536.patch",
"merged_at": "2020-08-31T14:20:39"... | true |
686,238,315 | 535 | Benchmarks | closed | [] | 2020-08-26T11:21:26 | 2020-08-27T08:40:00 | 2020-08-27T08:39:59 | Adding some benchmarks with DVC/CML
To add a new tracked benchmark:
- create a new python benchmarking script in `./benchmarks/`. The script can use the utilities in `./benchmarks/utils.py` and should output a JSON file with results in `./benchmarks/results/`.
- add a new pipeline stage in [dvc.yaml](./dvc.yaml) w... | thomwolf | https://github.com/huggingface/datasets/pull/535 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/535",
"html_url": "https://github.com/huggingface/datasets/pull/535",
"diff_url": "https://github.com/huggingface/datasets/pull/535.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/535.patch",
"merged_at": "2020-08-27T08:39:59"... | true |
686,115,912 | 534 | `list_datasets()` is broken. | closed | [] | 2020-08-26T08:19:01 | 2020-08-27T06:31:11 | 2020-08-27T06:31:11 | version = '0.4.0'
`list_datasets()` is broken. It results in the following error :
```
In [3]: nlp.list_datasets()
Out[3]: ---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~/.virtualenvs/san-lgUCsFg_/lib/py... | ashutosh-dwivedi-e3502 | https://github.com/huggingface/datasets/issues/534 | null | false |
685,585,914 | 533 | Fix ArrayXD for pyarrow 0.17.1 by using non fixed length list arrays | closed | [] | 2020-08-25T15:32:44 | 2020-08-26T08:02:24 | 2020-08-26T08:02:23 | It should fix the CI problems in #513 | lhoestq | https://github.com/huggingface/datasets/pull/533 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/533",
"html_url": "https://github.com/huggingface/datasets/pull/533",
"diff_url": "https://github.com/huggingface/datasets/pull/533.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/533.patch",
"merged_at": "2020-08-26T08:02:23"... | true |
685,540,614 | 532 | File exists error when used with TPU | open | [] | 2020-08-25T14:36:38 | 2020-09-01T12:14:56 | null | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | go-inoue | https://github.com/huggingface/datasets/issues/532 | null | false |
685,291,036 | 531 | add concatenate_datasets to the docs | closed | [] | 2020-08-25T08:40:05 | 2020-08-25T09:02:20 | 2020-08-25T09:02:19 | lhoestq | https://github.com/huggingface/datasets/pull/531 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/531",
"html_url": "https://github.com/huggingface/datasets/pull/531",
"diff_url": "https://github.com/huggingface/datasets/pull/531.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/531.patch",
"merged_at": "2020-08-25T09:02:19"... | true | |
684,825,612 | 530 | use ragged tensor by default | closed | [] | 2020-08-24T17:06:15 | 2021-10-22T19:38:40 | 2020-08-24T19:22:25 | I think it's better if it's clear whether the returned tensor is ragged or not when the type is set to tensorflow.
Previously it was a tensor (not ragged) if numpy could stack the output (which can change depending on the batch of examples you take), which makes things difficult to handle, as it may sometimes return a r...
"url": "https://api.github.com/repos/huggingface/datasets/pulls/530",
"html_url": "https://github.com/huggingface/datasets/pull/530",
"diff_url": "https://github.com/huggingface/datasets/pull/530.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/530.patch",
"merged_at": "2020-08-24T19:22:25"... | true |
684,797,157 | 529 | Add MLSUM | closed | [] | 2020-08-24T16:18:35 | 2020-08-26T08:04:11 | 2020-08-26T08:04:11 | Hello (again :) !),
So, I started a new branch because of a [rebase issue](https://github.com/huggingface/nlp/pull/463), sorry for the mess.
However, the command `pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_mlsum` still fails because there is no default language dataset : the s... | RachelKer | https://github.com/huggingface/datasets/pull/529 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/529",
"html_url": "https://github.com/huggingface/datasets/pull/529",
"diff_url": "https://github.com/huggingface/datasets/pull/529.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/529.patch",
"merged_at": "2020-08-26T08:04:10"... | true |
684,673,673 | 528 | fix missing variable names in docs | closed | [] | 2020-08-24T13:31:48 | 2020-08-25T09:04:04 | 2020-08-25T09:04:03 | fix #524 | lhoestq | https://github.com/huggingface/datasets/pull/528 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/528",
"html_url": "https://github.com/huggingface/datasets/pull/528",
"diff_url": "https://github.com/huggingface/datasets/pull/528.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/528.patch",
"merged_at": "2020-08-25T09:04:03"... | true |
684,632,930 | 527 | Fix config used for slow test on real dataset | closed | [] | 2020-08-24T12:39:34 | 2020-08-25T09:20:45 | 2020-08-25T09:20:44 | As noticed in #470, #474, #476, #504 , the slow test `test_load_real_dataset` couldn't run on datasets that require config parameters.
To fix that I replaced it with one test with the first config of BUILDER_CONFIGS `test_load_real_dataset`, and another test that runs all of the configs in BUILDER_CONFIGS `test_load... | lhoestq | https://github.com/huggingface/datasets/pull/527 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/527",
"html_url": "https://github.com/huggingface/datasets/pull/527",
"diff_url": "https://github.com/huggingface/datasets/pull/527.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/527.patch",
"merged_at": "2020-08-25T09:20:44"... | true |
684,615,455 | 526 | Returning None instead of "python" if dataset is unformatted | closed | [] | 2020-08-24T12:10:35 | 2020-08-24T12:50:43 | 2020-08-24T12:50:42 | Following the discussion on Slack, this small fix ensures that calling `dataset.set_format(type=dataset.format["type"])` works properly. Slightly breaking as calling `dataset.format` when the dataset is unformatted will return `None` instead of `python`. | TevenLeScao | https://github.com/huggingface/datasets/pull/526 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/526",
"html_url": "https://github.com/huggingface/datasets/pull/526",
"diff_url": "https://github.com/huggingface/datasets/pull/526.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/526.patch",
"merged_at": "2020-08-24T12:50:42"... | true |
683,875,483 | 525 | wmt download speed example | closed | [] | 2020-08-21T23:29:06 | 2022-10-04T17:45:39 | 2022-10-04T17:45:39 | Continuing from the slack 1.0 roadmap thread with @lhoestq , I realized slow downloads are only a thing sometimes. Here are a few examples; I suspect there are multiple issues. All commands were run from the same gcp us-central-1f machine.
```
import nlp
nlp.load_dataset('wmt16', 'de-en')
```
Downloads at 49.1 K... | sshleifer | https://github.com/huggingface/datasets/issues/525 | null | false |
683,686,359 | 524 | Some docs are missing parameter names | closed | [] | 2020-08-21T16:47:34 | 2020-08-25T09:04:03 | 2020-08-25T09:04:03 | See https://huggingface.co/nlp/master/package_reference/main_classes.html#nlp.Dataset.map. I believe this is because the parameter names are enclosed in backticks in the docstrings, maybe it's an old docstring format that doesn't work with the current Sphinx version. | jarednielsen | https://github.com/huggingface/datasets/issues/524 | null | false |
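For reference, here is a minimal sketch of how a table with the schema shown in the header row (state, is_pull_request, title, html_url, ...) could be loaded and filtered with the `datasets` library. The repository id and split name below are placeholders, not the actual name of this dataset.

```python
# A minimal sketch, assuming the table above corresponds to a Hugging Face dataset
# of GitHub issues with the columns listed in the header row.
# "user/github-issues" is a hypothetical repository id, not the real dataset name.
from datasets import load_dataset

issues = load_dataset("user/github-issues", split="train")

# Keep only plain issues (not pull requests) that are still open.
open_issues = issues.filter(
    lambda ex: ex["state"] == "open" and not ex["is_pull_request"]
)

print(f"{len(open_issues)} open issues out of {len(issues)} records")
print(open_issues[0]["number"], open_issues[0]["title"], open_issues[0]["html_url"])
```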