| id | number | title | state | comments | created_at | updated_at | closed_at | body | user | html_url | pull_request | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
606,109,196 | 18 | Updating caching mechanism - Allow dependency in dataset processing scripts - Fix style and quality in the repo | closed | [] | 2020-04-24T07:39:48 | 2020-04-29T15:27:28 | 2020-04-28T16:06:28 | This PR has a lot of content (might be hard to review, sorry, in particular because I fixed the style in the repo at the same time).
# Style & quality:
You can now install the style and quality tools with `pip install -e .[quality]`. This will install black, the compatible version of isort, and flake8.
You can then ... | thomwolf | https://github.com/huggingface/datasets/pull/18 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/18",
"html_url": "https://github.com/huggingface/datasets/pull/18",
"diff_url": "https://github.com/huggingface/datasets/pull/18.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/18.patch",
"merged_at": "2020-04-28T16:06:28"
} | true |
605,753,027 | 17 | Add Pandas as format type | closed | [] | 2020-04-23T18:20:14 | 2020-04-27T18:07:50 | 2020-04-27T18:07:48 | As detailed in the title ^^ | jplu | https://github.com/huggingface/datasets/pull/17 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/17",
"html_url": "https://github.com/huggingface/datasets/pull/17",
"diff_url": "https://github.com/huggingface/datasets/pull/17.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/17.patch",
"merged_at": "2020-04-27T18:07:48"
} | true |
605,661,462 | 16 | create our own DownloadManager | closed | [] | 2020-04-23T16:08:07 | 2021-05-05T18:25:24 | 2020-04-25T21:25:10 | I tried to create our own, much simpler, download manager by replacing all the complicated stuff with our own `cached_path` solution.
With this implementation, I tried `dataset = nlp.load('squad')` and it seems to work fine.
For the implementation, what I did exactly:
- I copied the old download manager
- I... | lhoestq | https://github.com/huggingface/datasets/pull/16 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/16",
"html_url": "https://github.com/huggingface/datasets/pull/16",
"diff_url": "https://github.com/huggingface/datasets/pull/16.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/16.patch",
"merged_at": "2020-04-25T21:25:10"
} | true |
604,906,708 | 15 | [Tests] General Test Design for all dataset scripts | closed | [] | 2020-04-22T16:46:01 | 2022-10-04T09:31:54 | 2020-04-27T14:48:02 | The general idea is similar to how testing is done in `transformers`. There is one general `test_dataset_common.py` file which has a `DatasetTesterMixin` class. This class implements all of the logic that can be used in a generic way for all dataset classes. The idea is to keep each individual dataset test file as mini... | patrickvonplaten | https://github.com/huggingface/datasets/pull/15 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/15",
"html_url": "https://github.com/huggingface/datasets/pull/15",
"diff_url": "https://github.com/huggingface/datasets/pull/15.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/15.patch",
"merged_at": "2020-04-27T14:48:02"
} | true |
604,761,315 | 14 | [Download] Only create dir if not already exist | closed | [] | 2020-04-22T13:32:51 | 2022-10-04T09:31:50 | 2020-04-23T08:27:33 | This was quite annoying to find out :D.
Some datasets save files to the same directory, so we should only create a new directory if it doesn't already exist. | patrickvonplaten | https://github.com/huggingface/datasets/pull/14 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/14",
"html_url": "https://github.com/huggingface/datasets/pull/14",
"diff_url": "https://github.com/huggingface/datasets/pull/14.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/14.patch",
"merged_at": "2020-04-23T08:27:33"
} | true |
604,547,951 | 13 | [Make style] | closed | [] | 2020-04-22T08:10:06 | 2024-11-20T13:42:58 | 2020-04-23T13:02:22 | Added Makefile and applied make style to all.
make style runs the following code:
```
style:
black --line-length 119 --target-version py35 src
isort --recursive src
```
It's the same code that is run in `transformers`. | patrickvonplaten | https://github.com/huggingface/datasets/pull/13 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/13",
"html_url": "https://github.com/huggingface/datasets/pull/13",
"diff_url": "https://github.com/huggingface/datasets/pull/13.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/13.patch",
"merged_at": "2020-04-23T13:02:22"
} | true |
604,518,583 | 12 | [Map Function] add assert statement if map function does not return dict or None | closed | [] | 2020-04-22T07:21:24 | 2022-10-04T09:31:53 | 2020-04-24T06:29:03 | IMO, if the provided function is neither a side-effect-only function such as a print statement (-> returns a value of type `None`) nor a function that updates the dataset (-> returns a value of type `dict`), then a `TypeError` should be raised.
Not sure whether you had cases in mind where the user should do something else @thomwolf , but I think ... | patrickvonplaten | https://github.com/huggingface/datasets/pull/12 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/12",
"html_url": "https://github.com/huggingface/datasets/pull/12",
"diff_url": "https://github.com/huggingface/datasets/pull/12.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/12.patch",
"merged_at": "2020-04-24T06:29:03"
} | true |
603,921,624 | 11 | [Convert TFDS to HFDS] Extend script to also allow just converting a single file | closed | [] | 2020-04-21T11:25:33 | 2022-10-04T09:31:46 | 2020-04-21T20:47:00 | Adds another argument to be able to convert only a single file | patrickvonplaten | https://github.com/huggingface/datasets/pull/11 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/11",
"html_url": "https://github.com/huggingface/datasets/pull/11",
"diff_url": "https://github.com/huggingface/datasets/pull/11.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/11.patch",
"merged_at": "2020-04-21T20:47:00"
} | true |
603,909,327 | 10 | Name json file "squad.json" instead of "squad.py.json" | closed | [] | 2020-04-21T11:04:28 | 2022-10-04T09:31:44 | 2020-04-21T20:48:06 | patrickvonplaten | https://github.com/huggingface/datasets/pull/10 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/10",
"html_url": "https://github.com/huggingface/datasets/pull/10",
"diff_url": "https://github.com/huggingface/datasets/pull/10.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/10.patch",
"merged_at": "2020-04-21T20:48:06"
} | true | |
603,894,874 | 9 | [Clean up] Datasets | closed | [] | 2020-04-21T10:39:56 | 2022-10-04T09:31:42 | 2020-04-21T20:49:58 | Clean up `nlp/datasets` folder.
As I understand it, the `nlp/datasets` folder will eventually not exist at all.
The folder `nlp/datasets/nlp` is kept for the moment, but won't be needed in the future, since it will live on S3 (actually it already does) at: `https://s3.console.aws.amazon.com/s3/buckets/datasets.h... | patrickvonplaten | https://github.com/huggingface/datasets/pull/9 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/9",
"html_url": "https://github.com/huggingface/datasets/pull/9",
"diff_url": "https://github.com/huggingface/datasets/pull/9.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/9.patch",
"merged_at": "2020-04-21T20:49:58"
} | true |
601,783,243 | 8 | Fix issue 6: error when the citation is missing in the DatasetInfo | closed | [] | 2020-04-17T08:04:26 | 2020-04-29T09:27:11 | 2020-04-20T13:24:12 | jplu | https://github.com/huggingface/datasets/pull/8 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8",
"html_url": "https://github.com/huggingface/datasets/pull/8",
"diff_url": "https://github.com/huggingface/datasets/pull/8.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8.patch",
"merged_at": "2020-04-20T13:24:12"
} | true | |
601,780,534 | 7 | Fix issue 5: allow empty datasets | closed | [] | 2020-04-17T07:59:56 | 2020-04-29T09:27:13 | 2020-04-20T13:23:48 | jplu | https://github.com/huggingface/datasets/pull/7 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7",
"html_url": "https://github.com/huggingface/datasets/pull/7",
"diff_url": "https://github.com/huggingface/datasets/pull/7.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7.patch",
"merged_at": "2020-04-20T13:23:47"
} | true | |
600,330,836 | 6 | Error when citation is not given in the DatasetInfo | closed | [] | 2020-04-15T14:14:54 | 2020-04-29T09:23:22 | 2020-04-29T09:23:22 | The following error is raised when the `citation` parameter is missing when we instantiate a `DatasetInfo`:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/dev/jplu/datasets/src/nlp/info.py", line 338, in __repr__
citation_pprint = _indent('"""{}"""'.format(self.... | jplu | https://github.com/huggingface/datasets/issues/6 | null | false |
600,295,889 | 5 | ValueError when a split is empty | closed | [] | 2020-04-15T13:25:13 | 2020-04-29T09:23:05 | 2020-04-29T09:23:05 | When a split (TRAIN, VALIDATION, or TEST) is empty, I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/dev/jplu/datasets/src/nlp/load.py", line 295, in load
ds = dbuilder.as_dataset(**as_dataset_kwargs)
File "/home/jplu/dev/jplu/data... | jplu | https://github.com/huggingface/datasets/issues/5 | null | false |
600,185,417 | 4 | [Feature] Keep the list of labels of a dataset as metadata | closed | [] | 2020-04-15T10:17:10 | 2020-07-08T16:59:46 | 2020-05-04T06:11:57 | It would be useful to keep the list of the labels of a dataset as metadata. Either directly in the `DatasetInfo` or in the Arrow metadata. | jplu | https://github.com/huggingface/datasets/issues/4 | null | false |
600,180,050 | 3 | [Feature] More dataset outputs | closed | [] | 2020-04-15T10:08:14 | 2020-05-04T06:12:27 | 2020-05-04T06:12:27 | Add the following dataset outputs:
- Spark
- Pandas | jplu | https://github.com/huggingface/datasets/issues/3 | null | false |
599,767,671 | 2 | Issue to read a local dataset | closed | [] | 2020-04-14T18:18:51 | 2020-05-11T18:55:23 | 2020-05-11T18:55:22 | Hello,
As proposed by @thomwolf, I'm opening an issue to explain what I'm trying to do without success. I want to create and load a local dataset; the script I wrote is the following:
```python
import os
import csv
import nlp
class BbcConfig(nlp.BuilderConfig):
def __init__(self, **kwarg... | jplu | https://github.com/huggingface/datasets/issues/2 | null | false |
599,457,467 | 1 | changing nlp.bool to nlp.bool_ | closed | [] | 2020-04-14T10:18:02 | 2022-10-04T09:31:40 | 2020-04-14T12:01:40 | mariamabarham | https://github.com/huggingface/datasets/pull/1 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1",
"html_url": "https://github.com/huggingface/datasets/pull/1",
"diff_url": "https://github.com/huggingface/datasets/pull/1.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1.patch",
"merged_at": "2020-04-14T12:01:40"
} | true |
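The rows above all share the same record shape: issues have `pull_request` set to `null` and `is_pull_request` set to `false`, while pull requests carry a dict whose `merged_at` field is non-null when merged. A minimal sketch of splitting such records, using two sample values copied from the table (field names match the column headers; this is an illustration, not an official API):

```python
# Records shaped like the rows of the table above.
records = [
    {"number": 18, "title": "Updating caching mechanism", "state": "closed",
     "is_pull_request": True,
     "pull_request": {"merged_at": "2020-04-28T16:06:28"}},
    {"number": 6, "title": "Error when citation is not given in the DatasetInfo",
     "state": "closed", "is_pull_request": False, "pull_request": None},
]

# Merged pull requests have a pull_request dict with a non-null merged_at.
merged_prs = [r["number"] for r in records
              if r["is_pull_request"] and r["pull_request"].get("merged_at")]

# Plain issues have is_pull_request set to False.
issues = [r["number"] for r in records if not r["is_pull_request"]]
```

Here `merged_prs` collects `[18]` and `issues` collects `[6]`, mirroring the distinction the `is_pull_request` column encodes.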