| id (int64) | number (int64) | title (string) | state (string) | comments (list) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s], nullable) | body (string, nullable) | user (string) | html_url (string) | pull_request (dict) | is_pull_request (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,244,693,690 | 4,389 | Fix bug in gem dataset for wiki_auto_asset_turk config | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-23T07:19:49 | 2022-05-23T10:38:26 | 2022-05-23T10:29:55 | This PR fixes some URLs.
Fix #4386. | albertvillanova | https://github.com/huggingface/datasets/pull/4389 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4389",
"html_url": "https://github.com/huggingface/datasets/pull/4389",
"diff_url": "https://github.com/huggingface/datasets/pull/4389.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4389.patch",
"merged_at": "2022-05-23T10:29... | true |
1,244,645,158 | 4,388 | Set builder name from module instead of class | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-23T06:26:35 | 2022-05-25T05:24:43 | 2022-05-25T05:16:15 | Now the builder name attribute is set from the builder class name.
This PR sets the builder name attribute from the module name instead. Some motivating reasons:
- The dataset ID is relevant and unique among all datasets and this is directly related to the repository name, i.e., the name of the directory conta... | albertvillanova | https://github.com/huggingface/datasets/pull/4388 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4388",
"html_url": "https://github.com/huggingface/datasets/pull/4388",
"diff_url": "https://github.com/huggingface/datasets/pull/4388.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4388.patch",
"merged_at": "2022-05-25T05:16... | true |
1,244,147,817 | 4,387 | device/google/accessory/adk2012 - Git at Google | closed | [] | 2022-05-22T04:57:19 | 2022-05-23T06:36:27 | 2022-05-23T06:36:27 | "git clone https://android.googlesource.com/device/google/accessory/adk2012"
https://android.googlesource.com/device/google/accessory/adk2012/#:~:text=git%20clone%20https%3A//android.googlesource.com/device/google/accessory/adk2012 | Aeckard45 | https://github.com/huggingface/datasets/issues/4387 | null | false |
1,243,965,532 | 4,386 | Bug for wiki_auto_asset_turk from GEM | closed | [
"Thanks for reporting, @StevenTang1998.\r\n\r\nI'm looking into it. ",
"Hi @StevenTang1998,\r\n\r\nWe have fixed the issue:\r\n- #4389\r\n\r\nThe fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by installing `datasets` from our GitHub repo:\r\n```\r\npip... | 2022-05-21T12:31:30 | 2022-05-24T05:55:52 | 2022-05-23T10:29:55 | ## Describe the bug
The script of wiki_auto_asset_turk for GEM may be out of date.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('gem', 'wiki_auto_asset_turk')
```
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/... | StevenTang1998 | https://github.com/huggingface/datasets/issues/4386 | null | false |
1,243,921,287 | 4,385 | Test dill | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I should point out that the hash will be the same if computed twice with the same code on the same version of dill (after adding huggingface's code that removes line numbers and file names, and sorts globals.) My changes in dill 0.3.... | 2022-05-21T08:57:43 | 2022-05-25T08:30:13 | 2022-05-25T08:21:48 | Regression test for future releases of `dill`.
Related to #4379. | albertvillanova | https://github.com/huggingface/datasets/pull/4385 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4385",
"html_url": "https://github.com/huggingface/datasets/pull/4385",
"diff_url": "https://github.com/huggingface/datasets/pull/4385.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4385.patch",
"merged_at": "2022-05-25T08:21... | true |
1,243,919,748 | 4,384 | Refactor download | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This looks like a breaking change no ?\r\nAlso could you explain why it would be better this way ?",
"The might be only there to help type checkers, but I am not too familiar with the code base to know for sure. I think this might ... | 2022-05-21T08:49:24 | 2022-05-25T10:52:02 | 2022-05-25T10:43:43 | This PR performs a refactoring of the download functionalities, by proposing a modular solution and moving them to their own package "download". Some motivating arguments:
- understandability: from a logical partitioning of the library, it makes sense to have all download functionalities grouped together instead of sc... | albertvillanova | https://github.com/huggingface/datasets/pull/4384 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4384",
"html_url": "https://github.com/huggingface/datasets/pull/4384",
"diff_url": "https://github.com/huggingface/datasets/pull/4384.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4384.patch",
"merged_at": "2022-05-25T10:43... | true |
1,243,856,981 | 4,383 | L | closed | [] | 2022-05-21T03:47:58 | 2022-05-21T19:20:13 | 2022-05-21T19:20:13 | ## Describe the L
L
## Expected L
A clear and concise lmll
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version: | AronCodes21 | https://github.com/huggingface/datasets/issues/4383 | null | false |
1,243,839,783 | 4,382 | First time trying | closed | [] | 2022-05-21T02:15:18 | 2022-05-21T19:20:44 | 2022-05-21T19:20:44 | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | Aeckard45 | https://github.com/huggingface/datasets/issues/4382 | null | false |
1,243,478,863 | 4,381 | Bug in caching 2 datasets both with the same builder class name | closed | [
"Hi @NouamaneTazi, thanks for reporting.\r\n\r\nPlease note that both datasets are cached in the same directory because their loading builder classes have the same name: `class MTOP(datasets.GeneratorBasedBuilder)`.\r\n\r\nYou should name their builder classes differently, e.g.:\r\n- `MtopDomain`\r\n- `MtopIntent`"... | 2022-05-20T18:18:03 | 2022-06-02T08:18:37 | 2022-05-25T05:16:15 | ## Describe the bug
The two datasets `mteb/mtop_intent` and `mteb/mtop_domain` both use the same cache folder `.cache/huggingface/datasets/mteb___mtop`. So if you first load `mteb/mtop_intent`, then datasets will not load `mteb/mtop_domain`.
If you delete this cache folder and flip the order in which you load the two datas... | NouamaneTazi | https://github.com/huggingface/datasets/issues/4381 | null | false |
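The collision described in #4381 can be sketched in a few lines of plain Python. This is a hypothetical illustration of the pre-#4388 behavior (cache directory derived from the builder *class* name), not the library's actual implementation; the path template and lowercasing are assumptions modeled on the `mteb___mtop` path quoted in the issue.

```python
# Hypothetical sketch: deriving a cache directory from the builder class
# name, as was the case before #4388 switched to the module name.
def cache_dir_for(namespace: str, builder_cls_name: str) -> str:
    # Assumed template, modeled on the ".cache/huggingface/datasets/mteb___mtop"
    # path quoted in the issue.
    return f".cache/huggingface/datasets/{namespace}___{builder_cls_name.lower()}"

# Both loading scripts define `class MTOP(datasets.GeneratorBasedBuilder)`,
# so the two distinct datasets resolve to the same cache directory:
intent_dir = cache_dir_for("mteb", "MTOP")  # from mteb/mtop_intent's script
domain_dir = cache_dir_for("mteb", "MTOP")  # from mteb/mtop_domain's script
print(intent_dir)                # .cache/huggingface/datasets/mteb___mtop
print(intent_dir == domain_dir)  # True -> second dataset reuses the first's cache

# Renaming the builder classes, as suggested in the thread, disambiguates them:
print(cache_dir_for("mteb", "MtopIntent") == cache_dir_for("mteb", "MtopDomain"))  # False
```

This also shows why #4388's fix (naming builders after the module/repository, which is unique per dataset) removes the collision at the source.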
1,243,183,054 | 4,380 | Pin dill | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-20T13:54:19 | 2022-06-13T10:03:52 | 2022-05-20T16:33:04 | Hotfix #4379.
CC: @sgugger | albertvillanova | https://github.com/huggingface/datasets/pull/4380 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4380",
"html_url": "https://github.com/huggingface/datasets/pull/4380",
"diff_url": "https://github.com/huggingface/datasets/pull/4380.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4380.patch",
"merged_at": "2022-05-20T16:33... | true |
1,243,175,854 | 4,379 | Latest dill release raises exception | closed | [
"Fixed by:\r\n- #4380 ",
"Just an additional insight, the latest dill (either 0.3.5 or 0.3.5.1) also broke the hashing/fingerprinting of any mapping function.\r\n\r\nFor example:\r\n```\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"rotten_tomatoes\")\r\nd.map(lambda x: x)\r\n```\r\n\r\nReturns th... | 2022-05-20T13:48:36 | 2022-05-21T15:53:26 | 2022-05-20T17:06:27 | ## Describe the bug
As reported by @sgugger, latest dill release is breaking things with Datasets.
```
______________ ExamplesTests.test_run_speech_recognition_seq2seq _______________
self = <multiprocess.pool.ApplyResult object at 0x7fa5981a1cd0>, timeout = None
def get(self, timeout=None):
s... | albertvillanova | https://github.com/huggingface/datasets/issues/4379 | null | false |
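The fingerprinting breakage mentioned in the #4379 comments can be illustrated with stdlib `pickle` standing in for `dill`: `datasets` derives a cache fingerprint from the serialized bytes of the mapping function, so a pickler change that alters those bytes changes every fingerprint. A minimal, hypothetical sketch (not the library's actual hasher):

```python
import hashlib
import pickle

def fingerprint(obj) -> str:
    # Stand-in for the library's hasher: hash of the serialized bytes.
    return hashlib.md5(pickle.dumps(obj)).hexdigest()

# Same "function" serialized the same way -> stable fingerprint, cache hit.
f1 = fingerprint(("lambda x: x", "dill==0.3.4"))
f2 = fingerprint(("lambda x: x", "dill==0.3.4"))

# Any change in how the pickler emits bytes (faked here via a version tag)
# changes the fingerprint, so previously cached results are no longer found.
f3 = fingerprint(("lambda x: x", "dill==0.3.5"))

print(f1 == f2, f1 == f3)  # True False
```

This is why #4380 pinned `dill` as a hotfix and #4385 added a regression test against future `dill` releases.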
1,242,935,373 | 4,378 | Tidy up license metadata for google_wellformed_query, newspop, sick | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"& thank you!"
] | 2022-05-20T10:16:12 | 2022-05-24T13:50:23 | 2022-05-24T13:10:27 | Amend three licenses on datasets to fit naming convention (lower case, cc licenses include sub-version number). I think that's it - everything else on datasets looks great & super-searchable now! | leondz | https://github.com/huggingface/datasets/pull/4378 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4378",
"html_url": "https://github.com/huggingface/datasets/pull/4378",
"diff_url": "https://github.com/huggingface/datasets/pull/4378.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4378.patch",
"merged_at": "2022-05-24T13:10... | true |
1,242,746,186 | 4,377 | Fix checksum and bug in irc_disentangle dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-20T07:29:28 | 2022-05-20T09:34:36 | 2022-05-20T09:26:32 | There was a bug in filepath segment:
- wrong: `jkkummerfeld-irc-disentanglement-fd379e9`
- right: `jkkummerfeld-irc-disentanglement-35f0a40`
Also there was a bug in the checksum of the downloaded file.
This PR fixes these issues.
Fix partially #4376.
| albertvillanova | https://github.com/huggingface/datasets/pull/4377 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4377",
"html_url": "https://github.com/huggingface/datasets/pull/4377",
"diff_url": "https://github.com/huggingface/datasets/pull/4377.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4377.patch",
"merged_at": "2022-05-20T09:26... | true |
1,242,218,144 | 4,376 | irc_disentagle viewer error | closed | [
"DUPLICATED comment from https://github.com/huggingface/datasets/issues/3807:\r\n\r\nmy code:\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"irc_disentangle\", download_mode=\"force_redownload\")\r\n```\r\nhowever, it produces the same error\r\n```\r\n[38](file:///Library/Frameworks/Pyt... | 2022-05-19T19:15:16 | 2023-01-12T16:56:13 | 2022-06-02T08:20:00 | the dataviewer shows this message for "ubuntu" - "train", "test", and "validation" splits:
```
Server error
Status code: 400
Exception: ValueError
Message: Cannot seek streaming HTTP file
```
it appears to give the same message for the "channel_two" data as well.
I get a Checksums error when usi... | labouz | https://github.com/huggingface/datasets/issues/4376 | null | false |
1,241,921,147 | 4,375 | Support DataLoader with num_workers > 0 in streaming mode | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Alright this is finally ready for review ! It's quite long I'm sorry, but it's not easy to disentangle everything ^^'\r\n\r\nThe main additions are in\r\n- src/datasets/formatting/dataset_wrappers/torch_iterable_dataset.py\r\n- src/d... | 2022-05-19T15:00:31 | 2022-07-04T16:05:14 | 2022-06-10T20:47:27 | ### Issue
It's currently not possible to properly stream a dataset using multiple `torch.utils.data.DataLoader` workers:
- the `TorchIterableDataset` can't be pickled and passed to the subprocesses: https://github.com/huggingface/datasets/issues/3950
- streaming extension is failing: https://github.com/huggingfa... | lhoestq | https://github.com/huggingface/datasets/pull/4375 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4375",
"html_url": "https://github.com/huggingface/datasets/pull/4375",
"diff_url": "https://github.com/huggingface/datasets/pull/4375.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4375.patch",
"merged_at": "2022-06-10T20:47... | true |
1,241,860,535 | 4,374 | extremely slow processing when using a custom dataset | closed | [
"Hi !\r\n\r\nMy guess is that some examples in your dataset are bigger than your RAM, and therefore loading them in RAM to pass them to `remove_non_indic_sentences` takes forever because it might use SWAP memory.\r\n\r\nMaybe several examples in your dataset are grouped together, can you check `len(lang_dataset[\"t... | 2022-05-19T14:18:05 | 2023-07-25T15:07:17 | 2023-07-25T15:07:16 | ## processing a custom dataset loaded as .txt file is extremely slow, compared to a dataset of similar volume from the hub
I have a large .txt file of 22 GB which I load into an HF dataset:
I have a large .txt file of 22 GB which I load into an HF dataset:
`lang_dataset = datasets.load_dataset("text", data_files="hi.txt")`
further i use a pre-processing function to clean the d... | StephennFernandes | https://github.com/huggingface/datasets/issues/4374 | null | false |
1,241,769,310 | 4,373 | Remove links in docs to old dataset viewer | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-19T13:24:39 | 2022-05-20T15:24:28 | 2022-05-20T15:16:05 | Remove the links in the docs to the no longer maintained dataset viewer. | mariosasko | https://github.com/huggingface/datasets/pull/4373 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4373",
"html_url": "https://github.com/huggingface/datasets/pull/4373",
"diff_url": "https://github.com/huggingface/datasets/pull/4373.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4373.patch",
"merged_at": "2022-05-20T15:16... | true |
1,241,703,826 | 4,372 | Check if dataset features match before push in `DatasetDict.push_to_hub` | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-19T12:32:30 | 2022-05-20T15:23:36 | 2022-05-20T15:15:30 | Fix #4211 | mariosasko | https://github.com/huggingface/datasets/pull/4372 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4372",
"html_url": "https://github.com/huggingface/datasets/pull/4372",
"diff_url": "https://github.com/huggingface/datasets/pull/4372.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4372.patch",
"merged_at": "2022-05-20T15:15... | true |
1,241,500,906 | 4,371 | Add missing language tags for udhr dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-19T09:34:10 | 2022-06-08T12:03:24 | 2022-05-20T09:43:10 | Related to #4362. | albertvillanova | https://github.com/huggingface/datasets/pull/4371 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4371",
"html_url": "https://github.com/huggingface/datasets/pull/4371",
"diff_url": "https://github.com/huggingface/datasets/pull/4371.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4371.patch",
"merged_at": "2022-05-20T09:43... | true |
1,240,245,642 | 4,369 | Add redirect to dataset script in the repo structure page | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-18T17:05:33 | 2022-05-19T08:19:01 | 2022-05-19T08:10:51 | Following https://github.com/huggingface/hub-docs/pull/146 I added a redirection to the dataset scripts documentation in the repository structure page. | lhoestq | https://github.com/huggingface/datasets/pull/4369 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4369",
"html_url": "https://github.com/huggingface/datasets/pull/4369",
"diff_url": "https://github.com/huggingface/datasets/pull/4369.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4369.patch",
"merged_at": "2022-05-19T08:10... | true |
1,240,064,860 | 4,368 | Add long answer candidates to natural questions dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Once we have added `long_answer_candidates` maybe it would be worth to also add the missing `candidate_index` (inside `long_answer`). What do you think, @seirasto ?",
"Also note the \"Data Fields\" section in the README is missing ... | 2022-05-18T14:35:42 | 2022-07-26T20:30:41 | 2022-07-26T20:18:42 | This is a modification of the Natural Questions dataset to include missing information specifically related to long answer candidates. (See here: https://github.com/google-research-datasets/natural-questions#long-answer-candidates). This information is important to ensure consistent comparison with prior work. It does ... | seirasto | https://github.com/huggingface/datasets/pull/4368 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4368",
"html_url": "https://github.com/huggingface/datasets/pull/4368",
"diff_url": "https://github.com/huggingface/datasets/pull/4368.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4368.patch",
"merged_at": "2022-07-26T20:18... | true |
1,240,011,602 | 4,367 | Remove config names as yaml keys | closed | [
"I included the change from https://github.com/huggingface/datasets/pull/4302 directly in this PR, this way the datasets will be updated right away in the CI (the CI is only triggered when a dataset card is changed)",
"_The documentation is not available anymore as the PR was closed or merged._",
"Alright it's ... | 2022-05-18T13:59:24 | 2022-05-20T09:35:26 | 2022-05-20T09:27:19 | Many datasets have dots in their config names. However it causes issues with the YAML tags of the dataset cards since we can't have dots in YAML keys.
I fix this, I removed the tags separations per config name completely, and have a single flat YAML for all configurations. Dataset search doesn't use this info anywa... | lhoestq | https://github.com/huggingface/datasets/pull/4367 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4367",
"html_url": "https://github.com/huggingface/datasets/pull/4367",
"diff_url": "https://github.com/huggingface/datasets/pull/4367.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4367.patch",
"merged_at": "2022-05-20T09:27... | true |
1,239,534,165 | 4,366 | TypeError: __init__() missing 1 required positional argument: 'scheme' | closed | [
"Duplicate of:\r\n- #3956\r\n\r\nI think you should report that issue to `elasticsearch` library: https://github.com/elastic/elasticsearch-py"
] | 2022-05-18T07:17:29 | 2022-05-18T16:36:22 | 2022-05-18T16:36:21 | "name" : "node-1",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "",
"version" : {
"number" : "7.5.0",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "",
"build_date" : "2019-11-26T01:06:52.518245Z",
"build_snapshot" : false,
"lucene_version" : "8.3.0... | jffgitt | https://github.com/huggingface/datasets/issues/4366 | null | false |
1,239,109,943 | 4,365 | Remove dots in config names | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Closing in favor of https://github.com/huggingface/datasets/pull/4367"
] | 2022-05-17T20:12:57 | 2023-09-24T10:02:53 | 2022-05-18T13:59:41 | 20+ datasets have dots in their config names. However it causes issues with the YAML tags of the dataset cards since we can't have dots in YAML keys.
This is related to https://github.com/huggingface/datasets/pull/2362 (internal https://github.com/huggingface/moon-landing/issues/946).
Also removing the dots in th... | lhoestq | https://github.com/huggingface/datasets/pull/4365 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4365",
"html_url": "https://github.com/huggingface/datasets/pull/4365",
"diff_url": "https://github.com/huggingface/datasets/pull/4365.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4365.patch",
"merged_at": null
} | true |
1,238,976,106 | 4,364 | Support complex feature types as `features` in packaged loaders | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-17T17:53:23 | 2022-05-31T12:26:23 | 2022-05-31T12:16:32 | This PR adds `table_cast` to the packaged loaders to fix casting to the `Image`/`Audio`, `ArrayND` and `ClassLabel` types. If these types are not present in the `builder.config.features` dictionary, the built-in `pa.Table.cast` is used for better performance. Additionally, this PR adds `cast_storage` to `ClassLabel` to... | mariosasko | https://github.com/huggingface/datasets/pull/4364 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4364",
"html_url": "https://github.com/huggingface/datasets/pull/4364",
"diff_url": "https://github.com/huggingface/datasets/pull/4364.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4364.patch",
"merged_at": "2022-05-31T12:16... | true |
1,238,897,652 | 4,363 | The dataset preview is not available for this split. | closed | [
"Hi! A dataset has to be streamable to work with the viewer. I did a quick test, and yours is, so this might be a bug in the viewer. cc @severo \r\n",
"Looking at it. The message is now:\r\n\r\n```\r\nMessage: cannot cache function '__shear_dense': no locator available for file '/src/services/worker/.venv/... | 2022-05-17T16:34:43 | 2022-06-08T12:32:10 | 2022-06-08T09:26:56 | I have uploaded the corpus developed by our lab in the speech domain to huggingface [datasets](https://huggingface.co/datasets/Roh/ryanspeech). You can read about the companion paper accepted in interspeech 2021 [here](https://arxiv.org/abs/2106.08468). The dataset works fine but I can't make the dataset preview work. ... | roholazandie | https://github.com/huggingface/datasets/issues/4363 | null | false |
1,238,680,112 | 4,362 | Update dataset_infos for UDHN/udhr dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for contributing @leondz.\r\n\r\nThe checksums of the files have changed because more languages have been added:\r\n- the new language codes need to be added to the dataset card (README file)\r\n- I think the dataset version n... | 2022-05-17T13:52:59 | 2022-06-08T19:20:11 | 2022-06-08T19:11:21 | Checksum update to `udhr` for issue #4361 | leondz | https://github.com/huggingface/datasets/pull/4362 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4362",
"html_url": "https://github.com/huggingface/datasets/pull/4362",
"diff_url": "https://github.com/huggingface/datasets/pull/4362.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4362.patch",
"merged_at": "2022-06-08T19:11... | true |
1,238,671,931 | 4,361 | `udhr` doesn't load, dataset checksum mismatch | closed | [] | 2022-05-17T13:47:09 | 2022-06-08T19:11:21 | 2022-06-08T19:11:21 | ## Describe the bug
Loading `udhr` fails due to a checksum mismatch for some source files. Looks like both of the source files on unicode.org have changed:
size + checksum in datasets repo:
```
(hfdev) leon@blade:~/datasets/datasets/udhr$ jq .default.download_checksums < dataset_infos.json
{
"https://unicode... | leondz | https://github.com/huggingface/datasets/issues/4361 | null | false |
1,237,239,096 | 4,360 | Fix example in opus_ubuntu, Add license info | closed | [
"CI seems to fail due to languages incorrectly being flagged as invalid, I guess that's related to the currently-broken bcp47 validation (see #4304)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-16T14:22:28 | 2022-06-01T13:06:07 | 2022-06-01T12:57:09 | This PR
* fixes a typo in the example for the `opus_ubuntu` dataset where it's mistakenly referred to as `ubuntu`
* adds the declared license info for this corpus' origin
* adds an example instance
* updates the data origin type | leondz | https://github.com/huggingface/datasets/pull/4360 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4360",
"html_url": "https://github.com/huggingface/datasets/pull/4360",
"diff_url": "https://github.com/huggingface/datasets/pull/4360.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4360.patch",
"merged_at": "2022-06-01T12:57... | true |
1,237,149,578 | 4,359 | Fix Version equality | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-16T13:19:26 | 2022-05-24T16:25:37 | 2022-05-24T16:17:14 | I think `Version` equality should align with other similar cases in Python, like:
```python
In [1]: "a" == 5, "a" == None
Out[1]: (False, False)
In [2]: "a" != 5, "a" != None
Out[2]: (True, True)
```
With this PR, we will get:
```python
In [3]: Version("1.0.0") == 5, Version("1.0.0") == None
Out[3]: (Fals... | albertvillanova | https://github.com/huggingface/datasets/pull/4359 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4359",
"html_url": "https://github.com/huggingface/datasets/pull/4359",
"diff_url": "https://github.com/huggingface/datasets/pull/4359.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4359.patch",
"merged_at": "2022-05-24T16:17... | true |
1,237,147,692 | 4,358 | Missing dataset tags and sections in some dataset cards | open | [
"@lhoestq I can take this issue. Please can you point out to me where I can find the other positional arguments?",
"Hi @RohitRathore1 :)\r\n\r\nYou can find all the YAML tags in the tagging app here: https://hf.co/spaces/huggingface/datasets-tagging). They're all passed as arguments to a DatasetMetadata object us... | 2022-05-16T13:18:16 | 2022-05-30T15:36:52 | null | Summary of CircleCI errors for different dataset metadata:
- **BoolQ**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
- **Conllpp**: expected some content in section `Citati... | sashavor | https://github.com/huggingface/datasets/issues/4358 | null | false |
1,237,037,069 | 4,357 | Fix warning in push_to_hub | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-16T11:50:17 | 2022-05-16T15:18:49 | 2022-05-16T15:10:41 | Fix warning:
```
FutureWarning: 'shard_size' was renamed to 'max_shard_size' in version 2.1.1 and will be removed in 2.4.0.
``` | albertvillanova | https://github.com/huggingface/datasets/pull/4357 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4357",
"html_url": "https://github.com/huggingface/datasets/pull/4357",
"diff_url": "https://github.com/huggingface/datasets/pull/4357.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4357.patch",
"merged_at": "2022-05-16T15:10... | true |
1,236,846,308 | 4,356 | Fix dataset builder default version | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This PR requires one of these other PRs being merged first:\r\n- #4359 \r\n- huggingface/doc-builder#211"
] | 2022-05-16T09:05:10 | 2022-05-30T13:56:58 | 2022-05-30T13:47:54 | Currently, when using a custom config (subclass of `BuilderConfig`), the default version set at the builder level is ignored: we must set the default version in the custom config class.
However, when loading a dataset with `config_kwargs` (for a configuration not present in `BUILDER_CONFIGS`), the default version set in the... | albertvillanova | https://github.com/huggingface/datasets/pull/4356 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4356",
"html_url": "https://github.com/huggingface/datasets/pull/4356",
"diff_url": "https://github.com/huggingface/datasets/pull/4356.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4356.patch",
"merged_at": "2022-05-30T13:47... | true |
1,236,797,490 | 4,355 | Fix warning in upload_file | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-16T08:21:31 | 2022-05-16T11:28:02 | 2022-05-16T11:19:57 | Fix warning:
```
FutureWarning: Pass path_or_fileobj='...' as keyword args. From version 0.7 passing these as positional arguments will result in an error
``` | albertvillanova | https://github.com/huggingface/datasets/pull/4355 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4355",
"html_url": "https://github.com/huggingface/datasets/pull/4355",
"diff_url": "https://github.com/huggingface/datasets/pull/4355.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4355.patch",
"merged_at": "2022-05-16T11:19... | true |
1,236,404,383 | 4,354 | Problems with WMT dataset | closed | [
"Hi! Yes, the docs are outdated. Expect this to be fixed soon. \r\n\r\nIn the meantime, you can try to fix the issue yourself.\r\n\r\nThese are the configs/language pairs supported by `wmt15` from which you can choose:\r\n* `cs-en` (Czech - English)\r\n* `de-en` (German - English)\r\n* `fi-en` (Finnish- English)\r\... | 2022-05-15T20:58:26 | 2022-07-11T14:54:02 | 2022-07-11T14:54:01 | ## Describe the bug
I am trying to load WMT15 dataset and to define which data-sources to use for train/validation/test splits, but unfortunately it seems that the official documentation at [https://huggingface.co/datasets/wmt15#:~:text=Versions%20exists%20for,wmt_translate%22%2C%20config%3Dconfig)](https://huggingfac... | eldarkurtic | https://github.com/huggingface/datasets/issues/4354 | null | false |
1,236,092,176 | 4,353 | Don't strip proceeding hyphen | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-14T18:25:29 | 2022-05-16T18:51:38 | 2022-05-16T13:52:11 | Closes #4320. | JohnGiorgi | https://github.com/huggingface/datasets/pull/4353 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4353",
"html_url": "https://github.com/huggingface/datasets/pull/4353",
"diff_url": "https://github.com/huggingface/datasets/pull/4353.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4353.patch",
"merged_at": "2022-05-16T13:52... | true |
1,236,086,170 | 4,352 | When using `dataset.map()` if passed `Features` types do not match what is returned from the mapped function, execution does not except in an obvious way | open | [
"Hi ! Thanks for reporting :) `datasets` usually returns a `pa.lib.ArrowInvalid` error if the feature types don't match.\r\n\r\nIt would be awesome if we had a way to reproduce the `OverflowError` in this case, to better understand what happened and be able to provide the best error message"
] | 2022-05-14T17:55:15 | 2022-05-16T15:09:17 | null | ## Describe the bug
Recently I was trying to using `.map()` to preprocess a dataset. I defined the expected Features and passed them into `.map()` like `dataset.map(preprocess_data, features=features)`. My expected `Features` keys matched what came out of `preprocess_data`, but the types i had defined for them did not... | plamb-viso | https://github.com/huggingface/datasets/issues/4352 | null | false |
1,235,950,209 | 4,351 | Add optional progress bar for .save_to_disk(..) and .load_from_disk(..) when working with remote filesystems | closed | [
"Hi! I like this idea. For consistency with `load_dataset`, we can use `fsspec`'s `TqdmCallback` in `.load_from_disk` to monitor the number of bytes downloaded, and in `.save_to_disk`, we can track the number of saved shards for consistency with `push_to_hub` (after we implement https://github.com/huggingface/data... | 2022-05-14T11:30:42 | 2022-12-14T18:22:59 | 2022-12-14T18:22:59 | **Is your feature request related to a problem? Please describe.**
When working with large datasets stored on remote filesystems (such as S3), the process of uploading a dataset can take a really long time. For instance: I was uploading a re-processed version of wmt17 en-ru to my s3 bucket and it took like 35 minutes(a...
1,235,505,104 | 4,350 | Add a new metric: CTC_Consistency | closed | [
"Thanks for your contribution, @YEdenZ.\r\n\r\nPlease note that our old `metrics` module is in the process of being incorporated to a separate library called `evaluate`: https://github.com/huggingface/evaluate\r\n\r\nTherefore, I would ask you to transfer your PR to that repository. Thank you."
] | 2022-05-13T17:31:19 | 2022-05-19T10:23:04 | 2022-05-19T10:23:03 | Add CTC_Consistency metric
Do I also need to modify the `test_metric_common.py` file to make it run on test? | YEdenZ | https://github.com/huggingface/datasets/pull/4350 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4350",
"html_url": "https://github.com/huggingface/datasets/pull/4350",
"diff_url": "https://github.com/huggingface/datasets/pull/4350.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4350.patch",
"merged_at": null
} | true |
1,235,474,765 | 4,349 | Dataset.map()'s fails at any value of parameter writer_batch_size | closed | [
"Note that this same issue occurs even if i preprocess with the more default way of tokenizing that uses LayoutLMv2Processor's internal OCR:\r\n\r\n```python\r\n feature_extractor = LayoutLMv2FeatureExtractor()\r\n tokenizer = LayoutLMv2Tokenizer.from_pretrained(\"microsoft/layoutlmv2-base-uncased\")\... | 2022-05-13T16:55:12 | 2022-06-02T12:51:11 | 2022-05-14T15:08:08 | ## Describe the bug
If the the value of `writer_batch_size` is less than the total number of instances in the dataset it will fail at that same number of instances. If it is greater than the total number of instances, it fails on the last instance.
Context:
I am attempting to fine-tune a pre-trained HuggingFace tr... | plamb-viso | https://github.com/huggingface/datasets/issues/4349 | null | false |
1,235,432,976 | 4,348 | `inspect` functions can't fetch dataset script from the Hub | closed | [
"Hi, thanks for reporting! `git bisect` points to #2986 as the PR that introduced the bug. Since then, there have been some additional changes to the loading logic, and in the current state, `force_local_path` (set via `local_path`) forbids pulling a script from the internet instead of downloading it: https://githu... | 2022-05-13T16:08:26 | 2022-06-09T10:26:06 | 2022-06-09T10:26:06 | The `inspect_dataset` and `inspect_metric` functions are unable to retrieve a dataset or metric script from the Hub and store it locally at the specified `local_path`:
```py
>>> from datasets import inspect_dataset
>>> inspect_dataset('rotten_tomatoes', local_path='path/to/my/local/folder')
FileNotFoundError: C... | stevhliu | https://github.com/huggingface/datasets/issues/4348 | null | false |
1,235,318,064 | 4,347 | Support remote cache_dir | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq thanks for your review.\r\n\r\nPlease note that `xjoin` cannot be used in this context, as it always returns a POSIX path string and this is not suitable on Windows machines.",
"<s>`xjoin` returns windows paths (not posix)... | 2022-05-13T14:26:35 | 2022-05-25T16:35:23 | 2022-05-25T16:27:03 | This PR implements complete support for remote `cache_dir`. Before, the support was just partial.
This is useful to create datasets using Apache Beam (parallel data processing) builder with `cache_dir` in a remote bucket, e.g., for Wikipedia dataset. | albertvillanova | https://github.com/huggingface/datasets/pull/4347 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4347",
"html_url": "https://github.com/huggingface/datasets/pull/4347",
"diff_url": "https://github.com/huggingface/datasets/pull/4347.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4347.patch",
"merged_at": "2022-05-25T16:27... | true |
1,235,067,062 | 4,346 | GH Action to build documentation never ends | closed | [] | 2022-05-13T10:44:44 | 2022-05-13T11:22:00 | 2022-05-13T11:22:00 | ## Describe the bug
See: https://github.com/huggingface/datasets/runs/6418035586?check_suite_focus=true
I finally forced the cancel of the workflow. | albertvillanova | https://github.com/huggingface/datasets/issues/4346 | null | false |
1,235,062,787 | 4,345 | Fix never ending GH Action to build documentation | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-13T10:40:10 | 2022-05-13T11:29:43 | 2022-05-13T11:22:00 | There was an unclosed code block introduced by:
- #4313
https://github.com/huggingface/datasets/pull/4313/files#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R538
This causes the "Make documentation" step in the "Build documentation" workflow to never finish.
- I think this issue should... | albertvillanova | https://github.com/huggingface/datasets/pull/4345 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4345",
"html_url": "https://github.com/huggingface/datasets/pull/4345",
"diff_url": "https://github.com/huggingface/datasets/pull/4345.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4345.patch",
"merged_at": "2022-05-13T11:22... | true |
1,234,882,542 | 4,344 | Fix docstring in DatasetDict::shuffle | closed | [] | 2022-05-13T08:06:00 | 2022-05-25T09:23:43 | 2022-05-24T15:35:21 | I think due to #1626, the docstring contained this error ever since `seed` was added. | felixdivo | https://github.com/huggingface/datasets/pull/4344 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4344",
"html_url": "https://github.com/huggingface/datasets/pull/4344",
"diff_url": "https://github.com/huggingface/datasets/pull/4344.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4344.patch",
"merged_at": "2022-05-24T15:35... | true |
1,234,864,168 | 4,343 | Metrics documentation is not accessible in the datasets doc UI | closed | [
"Hey @fxmarty :) Yes we are working on showing the docs of all the metrics on the Hugging face website. If you want to follow the advancements you can check the [evaluate](https://github.com/huggingface/evaluate) repository cc @lvwerra @sashavor "
] | 2022-05-13T07:46:30 | 2022-06-03T08:50:25 | 2022-06-03T08:50:25 | **Is your feature request related to a problem? Please describe.**
Search for a metric name like "seqeval" yields no results on https://huggingface.co/docs/datasets/master/en/index . One needs to go look in `datasets/metrics/README.md` to find the doc. Even in the `README.md`, it can be hard to understand what the met... | fxmarty | https://github.com/huggingface/datasets/issues/4343 | null | false |
1,234,743,765 | 4,342 | Fix failing CI on Windows for sari and wiki_split metrics | closed | [] | 2022-05-13T05:03:38 | 2022-05-13T05:47:42 | 2022-05-13T05:47:42 | This PR adds `sacremoses` as explicit tests dependency (required by sari and wiki_split metrics).
Before, this library was installed as a third-party dependency, but this is no longer the case for Windows.
Fix #4341. | albertvillanova | https://github.com/huggingface/datasets/pull/4342 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4342",
"html_url": "https://github.com/huggingface/datasets/pull/4342",
"diff_url": "https://github.com/huggingface/datasets/pull/4342.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4342.patch",
"merged_at": "2022-05-13T05:47... | true |
1,234,739,703 | 4,341 | Failing CI on Windows for sari and wiki_split metrics | closed | [] | 2022-05-13T04:55:17 | 2022-05-13T05:47:41 | 2022-05-13T05:47:41 | ## Describe the bug
Our CI is failing from yesterday on Windows for metrics: sari and wiki_split
```
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_sari - ...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_wiki_split
```
See: https://app.circleci.com/pipelines/githu... | albertvillanova | https://github.com/huggingface/datasets/issues/4341 | null | false |
1,234,671,025 | 4,340 | Fix irc_disentangle dataset script | closed | [
"Thanks ! This has been fixed in https://github.com/huggingface/datasets/pull/4377, we can close this PR"
] | 2022-05-13T02:37:57 | 2022-05-24T15:37:30 | 2022-05-24T15:37:29 | updated extracted dataset's repo's latest commit hash (included in tarball's name), and updated the related data_infos.json | i-am-pad | https://github.com/huggingface/datasets/pull/4340 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4340",
"html_url": "https://github.com/huggingface/datasets/pull/4340",
"diff_url": "https://github.com/huggingface/datasets/pull/4340.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4340.patch",
"merged_at": null
} | true |
1,234,496,289 | 4,339 | Dataset loader for the MSLR2022 shared task | closed | [
"I think the underlying issue is in https://github.com/huggingface/datasets/blob/c0ed6fdc29675b3565b01b77fde5ab5d9d8b60ec/src/datasets/commands/dummy_data.py#L124 - where `CSV`s are considered to be in the same class of file as text, jsonl, and tsv.\r\n\r\nI think this is an error because CSVs can have newlines wit... | 2022-05-12T21:23:41 | 2022-07-18T17:19:27 | 2022-07-18T16:58:34 | This PR adds a dataset loader for the [MSLR2022 Shared Task](https://github.com/allenai/mslr-shared-task). Both the MS^2 and Cochrane datasets can be loaded with this dataloader:
```python
from datasets import load_dataset
ms2 = load_dataset("mslr2022", "ms2")
cochrane = load_dataset("mslr2022", "cochrane")
``... | JohnGiorgi | https://github.com/huggingface/datasets/pull/4339 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4339",
"html_url": "https://github.com/huggingface/datasets/pull/4339",
"diff_url": "https://github.com/huggingface/datasets/pull/4339.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4339.patch",
"merged_at": null
} | true |
1,234,478,851 | 4,338 | Eval metadata Batch 4: Tweet Eval, Tweets Hate Speech Detection, VCTK, Weibo NER, Wisesight Sentiment, XSum, Yahoo Answers Topics, Yelp Polarity, Yelp Review Full | closed | [
"Summary of CircleCI errors:\r\n\r\n- **XSum**: missing 6 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', and 'source_datasets'\r\n- **Yelp_polarity**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', ... | 2022-05-12T21:02:08 | 2022-05-16T15:51:02 | 2022-05-16T15:42:59 | Adding evaluation metadata for:
- Tweet Eval
- Tweets Hate Speech Detection
- VCTK
- Weibo NER
- Wisesight Sentiment
- XSum
- Yahoo Answers Topics
- Yelp Polarity
- Yelp Review Full | sashavor | https://github.com/huggingface/datasets/pull/4338 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4338",
"html_url": "https://github.com/huggingface/datasets/pull/4338",
"diff_url": "https://github.com/huggingface/datasets/pull/4338.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4338.patch",
"merged_at": "2022-05-16T15:42... | true |
1,234,470,083 | 4,337 | Eval metadata batch 3: Reddit, Rotten Tomatoes, SemEval 2010, Sentiment 140, SMS Spam, Snips, SQuAD, SQuAD v2, Timit ASR | closed | [
"Summary of CircleCI errors:\r\n\r\n- **sem_eval_2010_task_8**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **sms_spam**: `Data Instances` and`Data Splits` are empty.... | 2022-05-12T20:52:02 | 2022-05-16T16:26:19 | 2022-05-16T16:18:30 | Adding evaluation metadata for:
- Reddit
- Rotten Tomatoes
- SemEval 2010
- Sentiment 140
- SMS Spam
- Snips
- SQuAD
- SQuAD v2
- Timit ASR | sashavor | https://github.com/huggingface/datasets/pull/4337 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4337",
"html_url": "https://github.com/huggingface/datasets/pull/4337",
"diff_url": "https://github.com/huggingface/datasets/pull/4337.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4337.patch",
"merged_at": "2022-05-16T16:18... | true |
1,234,446,174 | 4,336 | Eval metadata batch 2 : Health Fact, Jigsaw Toxicity, LIAR, LJ Speech, MSRA NER, Multi News, NCBI Disease, Poem Sentiment | closed | [
"Summary of CircleCI errors:\r\n- **Jjigsaw_toxicity_pred**: `Citation Information` but it is empty.\r\n- **LIAR** : `Data Instances`,`Data Fields`, `Data Splits`, `Citation Information` are empty.\r\n- **MSRA NER** : Dataset Summary`, `Data Instances`, `Data Fields`, `Data Splits`, `Citation Information` are ... | 2022-05-12T20:24:45 | 2022-05-16T16:25:00 | 2022-05-16T16:24:59 | Adding evaluation metadata for :
- Health Fact
- Jigsaw Toxicity
- LIAR
- LJ Speech
- MSRA NER
- Multi News
- NCBI Disease
- Poem Sentiment | sashavor | https://github.com/huggingface/datasets/pull/4336 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4336",
"html_url": "https://github.com/huggingface/datasets/pull/4336",
"diff_url": "https://github.com/huggingface/datasets/pull/4336.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4336.patch",
"merged_at": "2022-05-16T16:24... | true |
1,234,157,123 | 4,335 | Eval metadata batch 1: BillSum, CoNLL2003, CoNLLPP, CUAD, Emotion, GigaWord, GLUE, Hate Speech 18, Hate Speech | closed | [
"Summary of CircleCI errors:\r\n- **BoolQ**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **Conllpp**: expected some content in section `Citation Information` but it i... | 2022-05-12T15:28:16 | 2022-05-16T16:31:10 | 2022-05-16T16:23:09 | Adding evaluation metadata for:
- BillSum
- CoNLL2003
- CoNLLPP
- CUAD
- Emotion
- GigaWord
- GLUE
- Hate Speech 18
- Hate Speech Offensive | sashavor | https://github.com/huggingface/datasets/pull/4335 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4335",
"html_url": "https://github.com/huggingface/datasets/pull/4335",
"diff_url": "https://github.com/huggingface/datasets/pull/4335.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4335.patch",
"merged_at": "2022-05-16T16:23... | true |
1,234,103,477 | 4,334 | Adding eval metadata for billsum | closed | [] | 2022-05-12T14:49:08 | 2023-09-24T10:02:46 | 2022-05-12T14:49:24 | Adding eval metadata for billsum | sashavor | https://github.com/huggingface/datasets/pull/4334 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4334",
"html_url": "https://github.com/huggingface/datasets/pull/4334",
"diff_url": "https://github.com/huggingface/datasets/pull/4334.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4334.patch",
"merged_at": null
} | true |
1,234,038,705 | 4,333 | Adding eval metadata for Banking 77 | closed | [
"@lhoestq , Circle CI is giving me an error, saying that ['extended'] is a key that shouldn't be in the dataset metadata, but it was there before my modification (so I don't want to remove it)"
] | 2022-05-12T14:05:05 | 2022-05-12T21:03:32 | 2022-05-12T21:03:31 | Adding eval metadata for Banking 77 | sashavor | https://github.com/huggingface/datasets/pull/4333 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4333",
"html_url": "https://github.com/huggingface/datasets/pull/4333",
"diff_url": "https://github.com/huggingface/datasets/pull/4333.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4333.patch",
"merged_at": "2022-05-12T21:03... | true |
1,234,021,188 | 4,332 | Adding eval metadata for arabic speech corpus | closed | [] | 2022-05-12T13:51:38 | 2022-05-12T21:03:21 | 2022-05-12T21:03:20 | Adding eval metadata for arabic speech corpus | sashavor | https://github.com/huggingface/datasets/pull/4332 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4332",
"html_url": "https://github.com/huggingface/datasets/pull/4332",
"diff_url": "https://github.com/huggingface/datasets/pull/4332.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4332.patch",
"merged_at": "2022-05-12T21:03... | true |
1,234,016,110 | 4,331 | Adding eval metadata to Amazon Polarity | closed | [] | 2022-05-12T13:47:59 | 2022-05-12T21:03:14 | 2022-05-12T21:03:13 | Adding eval metadata to Amazon Polarity | sashavor | https://github.com/huggingface/datasets/pull/4331 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4331",
"html_url": "https://github.com/huggingface/datasets/pull/4331",
"diff_url": "https://github.com/huggingface/datasets/pull/4331.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4331.patch",
"merged_at": "2022-05-12T21:03... | true |
1,233,992,681 | 4,330 | Adding eval metadata to Allociné dataset | closed | [] | 2022-05-12T13:31:39 | 2022-05-12T21:03:05 | 2022-05-12T21:03:05 | Adding eval metadata to Allociné dataset | sashavor | https://github.com/huggingface/datasets/pull/4330 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4330",
"html_url": "https://github.com/huggingface/datasets/pull/4330",
"diff_url": "https://github.com/huggingface/datasets/pull/4330.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4330.patch",
"merged_at": "2022-05-12T21:03... | true |
1,233,991,207 | 4,329 | Adding eval metadata for AG News | closed | [] | 2022-05-12T13:30:32 | 2022-05-12T21:02:41 | 2022-05-12T21:02:40 | Adding eval metadata for AG News | sashavor | https://github.com/huggingface/datasets/pull/4329 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4329",
"html_url": "https://github.com/huggingface/datasets/pull/4329",
"diff_url": "https://github.com/huggingface/datasets/pull/4329.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4329.patch",
"merged_at": "2022-05-12T21:02... | true |
1,233,856,690 | 4,328 | Fix and clean Apache Beam functionality | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-12T11:41:07 | 2022-05-24T13:43:11 | 2022-05-24T13:34:32 | null | albertvillanova | https://github.com/huggingface/datasets/pull/4328 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4328",
"html_url": "https://github.com/huggingface/datasets/pull/4328",
"diff_url": "https://github.com/huggingface/datasets/pull/4328.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4328.patch",
"merged_at": "2022-05-24T13:34... | true |
1,233,840,020 | 4,327 | `wikipedia` pre-processed datasets | closed | [
"Hi @vpj, thanks for reporting.\r\n\r\nI'm sorry, but I can't reproduce your bug: I load \"20220301.simple\"in 9 seconds:\r\n```shell\r\ntime python -c \"from datasets import load_dataset; load_dataset('wikipedia', '20220301.simple')\"\r\n\r\nDownloading and preparing dataset wikipedia/20220301.simple (download: 22... | 2022-05-12T11:25:42 | 2022-08-31T08:26:57 | 2022-08-31T08:26:57 | ## Describe the bug
[Wikipedia](https://huggingface.co/datasets/wikipedia) dataset readme says that certain subsets are preprocessed. However it seems like they are not available. When I try to load them it takes a really long time, and it seems like it's processing them.
## Steps to reproduce the bug
```python
f... | vpj | https://github.com/huggingface/datasets/issues/4327 | null | false |
1,233,818,489 | 4,326 | Fix type hint and documentation for `new_fingerprint` | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-12T11:05:08 | 2022-06-01T13:04:45 | 2022-06-01T12:56:18 | Currently, there are no type hints nor `Optional` for the argument `new_fingerprint` in several methods of `datasets.arrow_dataset.Dataset`.
There was some documentation missing as well.
Note that pylance is happy with the type hints, but pyright does not detect that `new_fingerprint` is set within the decorator.... | fxmarty | https://github.com/huggingface/datasets/pull/4326 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4326",
"html_url": "https://github.com/huggingface/datasets/pull/4326",
"diff_url": "https://github.com/huggingface/datasets/pull/4326.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4326.patch",
"merged_at": "2022-06-01T12:56... | true |
1,233,812,191 | 4,325 | Dataset Viewer issue for strombergnlp/offenseval_2020, strombergnlp/polstance | closed | [
"Not sure if it's related... I was going to raise an issue for https://huggingface.co/datasets/domenicrosati/TruthfulQA which also has the same issue... https://huggingface.co/datasets/domenicrosati/TruthfulQA/viewer/domenicrosati--TruthfulQA/train \r\n\r\n",
"Yes, it's related. The backend behind the dataset vie... | 2022-05-12T10:59:08 | 2022-05-13T10:57:15 | 2022-05-13T10:57:02 | ### Link
https://huggingface.co/datasets/strombergnlp/offenseval_2020/viewer/ar/train
### Description
The viewer isn't running for these two datasets. I left it overnight because a wait sometimes helps things get loaded, and the error messages have all gone, but the datasets are still turning up blank in viewer. May... | leondz | https://github.com/huggingface/datasets/issues/4325 | null | false |
1,233,780,870 | 4,324 | Support >1 PWC dataset per dataset card | open | [
"Hi @leondz, I agree it would be nice. We'll see what we can do ;)"
] | 2022-05-12T10:29:07 | 2022-05-13T11:25:29 | null | **Is your feature request related to a problem? Please describe.**
Some datasets cover more than one dataset on PapersWithCode. For example, the OffensEval 2020 challenge involved five languages, and there's one dataset to cover all five datasets, [`strombergnlp/offenseval_2020`](https://huggingface.co/datasets/stromb... | leondz | https://github.com/huggingface/datasets/issues/4324 | null | false |
1,233,634,928 | 4,323 | Audio can not find value["bytes"] | closed | [
"\r\n\r\nthat is reason my bytes`s empty\r\nbut i have some confused why path prior is higher than bytes?\r\n\r\nif you can make bytes in _generate_examples , you don`t have to make bytes to path?\r\nbecau... | 2022-05-12T08:31:58 | 2022-07-07T13:16:08 | 2022-07-07T13:16:08 | ## Describe the bug
I wrote down _generate_examples like:

but where is the bytes?

## ... | YooSungHyun | https://github.com/huggingface/datasets/issues/4323 | null | false |
1,233,596,947 | 4,322 | Added stratify option to train_test_split function. | closed | [
"> Nice thank you ! This will be super useful :)\r\n> \r\n> Could you also add some tests in test_arrow_dataset.py and add an example of usage in the `Example:` section of the `train_test_split` docstring ?\r\n\r\nI will try to do it, is there any documentation for adding test cases? I have never done it before.",
... | 2022-05-12T08:00:31 | 2022-11-22T14:53:55 | 2022-05-25T20:43:51 | This PR adds `stratify` option to `train_test_split` method. I took reference from scikit-learn's `StratifiedShuffleSplit` class for implementing stratified split and integrated the changes as were suggested by @lhoestq.
It fixes #3452.
@lhoestq Please review and let me know, if any changes are required.
| nandwalritik | https://github.com/huggingface/datasets/pull/4322 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4322",
"html_url": "https://github.com/huggingface/datasets/pull/4322",
"diff_url": "https://github.com/huggingface/datasets/pull/4322.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4322.patch",
"merged_at": "2022-05-25T20:43... | true |
1,233,273,351 | 4,321 | Adding dataset enwik8 | closed | [
"@lhoestq Thank you for the great feedback! Looks like all tests are passing now :)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-11T23:25:02 | 2022-06-01T14:27:30 | 2022-06-01T14:04:06 | Because I regularly work with enwik8, I would like to contribute the dataset loader 🤗 | HallerPatrick | https://github.com/huggingface/datasets/pull/4321 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4321",
"html_url": "https://github.com/huggingface/datasets/pull/4321",
"diff_url": "https://github.com/huggingface/datasets/pull/4321.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4321.patch",
"merged_at": "2022-06-01T14:04... | true |
1,233,208,864 | 4,320 | Multi-news dataset loader attempts to strip wrong character from beginning of summaries | closed | [
"Hi ! Thanks for reporting :)\r\n\r\nThis dataset was simply converted from [tensorflow datasets](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/summarization/multi_news.py)\r\n\r\nI think we can just remove the `.strip(\"- \")` and keep this character",
"Cool! I made a PR."
] | 2022-05-11T21:36:41 | 2022-05-16T13:52:10 | 2022-05-16T13:52:10 | ## Describe the bug
The `multi_news.py` data loader has [a line which attempts to strip `"- "` from the beginning of summaries](https://github.com/huggingface/datasets/blob/aa743886221d76afb409d263e1b136e7a71fe2b4/datasets/multi_news/multi_news.py#L97). The actual character in the multi-news dataset, however, is `"–... | JohnGiorgi | https://github.com/huggingface/datasets/issues/4320 | null | false |
1,232,982,023 | 4,319 | Adding eval metadata for ade v2 | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-11T17:36:20 | 2022-05-12T13:29:51 | 2022-05-12T13:22:19 | Adding metadata to allow evaluation | sashavor | https://github.com/huggingface/datasets/pull/4319 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4319",
"html_url": "https://github.com/huggingface/datasets/pull/4319",
"diff_url": "https://github.com/huggingface/datasets/pull/4319.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4319.patch",
"merged_at": "2022-05-12T13:22... | true |
1,232,905,488 | 4,318 | Don't check f.loc in _get_extraction_protocol_with_magic_number | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-11T16:27:09 | 2022-05-11T16:57:02 | 2022-05-11T16:46:31 | `f.loc` doesn't always exist for file-like objects in python. I removed it since it was not necessary anyway (we always seek the file to 0 after reading the magic number)
Fix https://github.com/huggingface/datasets/issues/4310 | lhoestq | https://github.com/huggingface/datasets/pull/4318 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4318",
"html_url": "https://github.com/huggingface/datasets/pull/4318",
"diff_url": "https://github.com/huggingface/datasets/pull/4318.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4318.patch",
"merged_at": "2022-05-11T16:46... | true |
1,232,737,401 | 4,317 | Fix cnn_dailymail (dm stories were ignored) | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-11T14:25:25 | 2022-05-11T16:00:09 | 2022-05-11T15:52:37 | https://github.com/huggingface/datasets/pull/4188 introduced a bug in `datasets` 2.2.0: DailyMail stories are ignored when generating the dataset.
I fixed that, and removed the google drive link (it has annoying quota limitations issues)
We can do a patch release after this is merged | lhoestq | https://github.com/huggingface/datasets/pull/4317 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4317",
"html_url": "https://github.com/huggingface/datasets/pull/4317",
"diff_url": "https://github.com/huggingface/datasets/pull/4317.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4317.patch",
"merged_at": "2022-05-11T15:52... | true |
1,232,681,207 | 4,316 | Support passing config_kwargs to CLI run_beam | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-11T13:53:37 | 2022-05-11T14:36:49 | 2022-05-11T14:28:31 | This PR supports passing `config_kwargs` to CLI run_beam, so that for example for "wikipedia" dataset, we can pass:
```
--date 20220501 --language ca
``` | albertvillanova | https://github.com/huggingface/datasets/pull/4316 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4316",
"html_url": "https://github.com/huggingface/datasets/pull/4316",
"diff_url": "https://github.com/huggingface/datasets/pull/4316.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4316.patch",
"merged_at": "2022-05-11T14:28... | true |
1,232,549,330 | 4,315 | Fix CLI run_beam namespace | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-11T12:21:00 | 2022-05-11T13:13:00 | 2022-05-11T13:05:08 | Currently, it raises TypeError:
```
TypeError: __init__() got an unexpected keyword argument 'namespace'
``` | albertvillanova | https://github.com/huggingface/datasets/pull/4315 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4315",
"html_url": "https://github.com/huggingface/datasets/pull/4315",
"diff_url": "https://github.com/huggingface/datasets/pull/4315.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4315.patch",
"merged_at": "2022-05-11T13:05... | true |
1,232,326,726 | 4,314 | Catch pull error when mirroring | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-11T09:38:35 | 2022-05-11T12:54:07 | 2022-05-11T12:46:42 | Catch pull errors when mirroring so that the script continues to update the other datasets.
The error will still be printed at the end of the job. In this case the job also fails, and asks to manually update the datasets that failed. | lhoestq | https://github.com/huggingface/datasets/pull/4314 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4314",
"html_url": "https://github.com/huggingface/datasets/pull/4314",
"diff_url": "https://github.com/huggingface/datasets/pull/4314.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4314.patch",
"merged_at": "2022-05-11T12:46... | true |
1,231,764,100 | 4,313 | Add API code examples for Builder classes | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-10T22:22:32 | 2022-05-12T17:02:43 | 2022-05-12T12:36:57 | This PR adds API code examples for the Builder classes. | stevhliu | https://github.com/huggingface/datasets/pull/4313 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4313",
"html_url": "https://github.com/huggingface/datasets/pull/4313",
"diff_url": "https://github.com/huggingface/datasets/pull/4313.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4313.patch",
"merged_at": "2022-05-12T12:36... | true |
1,231,662,775 | 4,312 | added TR-News dataset | closed | [
"Thanks for your contribution, @batubayk.\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nI would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] | 2022-05-10T20:33:00 | 2022-10-03T09:36:45 | 2022-10-03T09:36:45 | null | batubayk | https://github.com/huggingface/datasets/pull/4312 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4312",
"html_url": "https://github.com/huggingface/datasets/pull/4312",
"diff_url": "https://github.com/huggingface/datasets/pull/4312.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4312.patch",
"merged_at": null
} | true |
1,231,369,438 | 4,311 | [Imagefolder] Docs + Don't infer labels from file names when there are metadata + Error messages when metadata and images aren't linked correctly | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Merging this one since mario is off, I took care of adding some tests to make sure everything is fine. Will do the release after it"
] | 2022-05-10T15:52:15 | 2022-05-10T17:19:42 | 2022-05-10T17:11:47 | I updated the `docs/source/image_process.mdx` documentation and added an example for image captioning and object detection using `ImageFolder`.
While doing so I also improved a few aspects:
- we don't need to infer labels from file names when there are metadata - they can just be in the metadata if necessary
- rai... | lhoestq | https://github.com/huggingface/datasets/pull/4311 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4311",
"html_url": "https://github.com/huggingface/datasets/pull/4311",
"diff_url": "https://github.com/huggingface/datasets/pull/4311.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4311.patch",
"merged_at": "2022-05-10T17:11... | true |
1,231,319,815 | 4,310 | Loading dataset with streaming: '_io.BufferedReader' object has no attribute 'loc' | closed | [] | 2022-05-10T15:12:53 | 2022-05-11T16:46:31 | 2022-05-11T16:46:31 | ## Describe the bug
Loading a datasets with `load_dataset` and `streaming=True` returns `AttributeError: '_io.BufferedReader' object has no attribute 'loc'`. Notice that loading with `streaming=False` works fine.
In the following steps we load parquet files but the same happens with pickle files. The problem seems ... | milmin | https://github.com/huggingface/datasets/issues/4310 | null | false |
1,231,232,935 | 4,309 | [WIP] Add TEDLIUM dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('./datasets/tedlium', 'release1', cache_dir='/home/sanchitgandhi/cache')\r\n```\r\n\r\n```\r\nDownloading and preparing dataset tedlium/release1 to /home/sanchit... | 2022-05-10T14:12:47 | 2022-06-17T12:54:40 | 2022-06-17T11:44:01 | Adds the TED-LIUM dataset https://www.tensorflow.org/datasets/catalog/tedlium#tedliumrelease3
TODO:
- [x] Port `tedium.py` from TF datasets using `convert_dataset.sh` script
- [x] Make `load_dataset` work
- [ ] ~~Run `datasets-cli` command to generate `dataset_infos.json`~~
- [ ] ~~Create dummy data for conti... | sanchit-gandhi | https://github.com/huggingface/datasets/pull/4309 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4309",
"html_url": "https://github.com/huggingface/datasets/pull/4309",
"diff_url": "https://github.com/huggingface/datasets/pull/4309.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4309.patch",
"merged_at": null
} | true |
1,231,217,783 | 4,308 | Remove unused multiprocessing args from test CLI | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-10T14:02:15 | 2022-05-11T12:58:25 | 2022-05-11T12:50:43 | Multiprocessing is not used in the test CLI. | albertvillanova | https://github.com/huggingface/datasets/pull/4308 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4308",
"html_url": "https://github.com/huggingface/datasets/pull/4308",
"diff_url": "https://github.com/huggingface/datasets/pull/4308.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4308.patch",
"merged_at": "2022-05-11T12:50... | true |
1,231,175,639 | 4,307 | Add packaged builder configs to the documentation | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-10T13:34:19 | 2022-05-10T14:03:50 | 2022-05-10T13:55:54 | Add the packaged builders configurations to the docs reference is useful to show the list of all parameters one can use when loading data in many formats: CSV, JSON, etc. | lhoestq | https://github.com/huggingface/datasets/pull/4307 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4307",
"html_url": "https://github.com/huggingface/datasets/pull/4307",
"diff_url": "https://github.com/huggingface/datasets/pull/4307.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4307.patch",
"merged_at": "2022-05-10T13:55... | true |
1,231,137,204 | 4,306 | `load_dataset` does not work with certain filename. | closed | [
"Never mind. It is because of the caching of datasets..."
] | 2022-05-10T13:14:04 | 2022-05-10T18:58:36 | 2022-05-10T18:58:09 | ## Describe the bug
This is a weird bug that took me some time to find out.
I have a JSON dataset that I want to load with `load_dataset` like this:
```
data_files = dict(train="train.json.zip", val="val.json.zip")
dataset = load_dataset("json", data_files=data_files, field="data")
```
## Expected results
... | whatever60 | https://github.com/huggingface/datasets/issues/4306 | null | false |
1,231,099,934 | 4,305 | Fixes FrugalScore | open | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4305). All of your documentation changes will be reflected on that endpoint.",
"> predictions and references are swapped. Basically Frugalscore is commutative, however some tiny differences can occur if we swap the references a... | 2022-05-10T12:44:06 | 2022-09-22T16:42:06 | null | There are two minor modifications in this PR:
1) `predictions` and `references` are swapped. Basically Frugalscore is commutative, however some tiny differences can occur if we swap the references and the predictions. I decided to swap them just to obtain the exact results as reported in the paper.
2) I switched to d... | moussaKam | https://github.com/huggingface/datasets/pull/4305 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4305",
"html_url": "https://github.com/huggingface/datasets/pull/4305",
"diff_url": "https://github.com/huggingface/datasets/pull/4305.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4305.patch",
"merged_at": null
} | true |
1,231,047,051 | 4,304 | Language code search does direct matches | open | [
"Thanks for reporting ! I forwarded the issue to the front-end team :)\r\n\r\nWill keep you posted !\r\n\r\nI also changed the tagging app to suggest two letters code for now."
] | 2022-05-10T11:59:16 | 2022-05-10T12:38:42 | null | ## Describe the bug
Hi. Searching for bcp47 tags that are just the language prefix (e.g. `sq` or `da`) excludes datasets that have added extra information in their language metadata (e.g. `sq-AL` or `da-bornholm`). The example codes given in the [tagging app](https://huggingface.co/spaces/huggingface/datasets-taggin... | leondz | https://github.com/huggingface/datasets/issues/4304 | null | false |
1,230,867,728 | 4,303 | Fix: Add missing comma | closed | [
"The CI failure is unrelated to this PR and fixed on master, merging :)"
] | 2022-05-10T09:21:38 | 2022-05-11T08:50:15 | 2022-05-11T08:50:14 | null | mrm8488 | https://github.com/huggingface/datasets/pull/4303 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4303",
"html_url": "https://github.com/huggingface/datasets/pull/4303",
"diff_url": "https://github.com/huggingface/datasets/pull/4303.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4303.patch",
"merged_at": "2022-05-11T08:50... | true |
1,230,651,117 | 4,302 | Remove hacking license tags when mirroring datasets on the Hub | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The Hub doesn't allow these characters in the YAML tags, and git push fails if you want to push a dataset card containing these characters.",
"Ok, let me rename the bad config names :) I think I can also keep backward compatibility... | 2022-05-10T05:52:46 | 2022-05-20T09:48:30 | 2022-05-20T09:40:20 | Currently, when mirroring datasets on the Hub, the license tags are hacked: removed of characters "." and "$". On the contrary, this hacking is not applied to community datasets on the Hub. This generates multiple variants of the same tag on the Hub.
I guess this hacking is no longer necessary:
- it is not applied... | albertvillanova | https://github.com/huggingface/datasets/pull/4302 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4302",
"html_url": "https://github.com/huggingface/datasets/pull/4302",
"diff_url": "https://github.com/huggingface/datasets/pull/4302.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4302.patch",
"merged_at": null
} | true |
1,230,401,256 | 4,301 | Add ImageNet-Sketch dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I think you can go ahead with uploading the data, and also ping the author in parallel. I think the images may subject to copyright anyway (scrapped from google image) so the dataset author is not allowed to set a license to the data... | 2022-05-09T23:38:45 | 2022-05-23T18:14:14 | 2022-05-23T18:05:29 | This PR adds the ImageNet-Sketch dataset and resolves #3953 . | nateraw | https://github.com/huggingface/datasets/pull/4301 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4301",
"html_url": "https://github.com/huggingface/datasets/pull/4301",
"diff_url": "https://github.com/huggingface/datasets/pull/4301.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4301.patch",
"merged_at": "2022-05-23T18:05... | true |
1,230,272,761 | 4,300 | Add API code examples for loading methods | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-09T21:30:26 | 2022-05-25T16:23:15 | 2022-05-25T09:20:13 | This PR adds API code examples for loading methods, let me know if I've missed any important parameters we should showcase :)
I was a bit confused about `inspect_dataset` and `inspect_metric`. The `path` parameter says it will accept a dataset identifier from the Hub. But when I try the identifier `rotten_tomatoes`,... | stevhliu | https://github.com/huggingface/datasets/pull/4300 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4300",
"html_url": "https://github.com/huggingface/datasets/pull/4300",
"diff_url": "https://github.com/huggingface/datasets/pull/4300.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4300.patch",
"merged_at": "2022-05-25T09:20... | true |
1,230,236,782 | 4,299 | Remove manual download from imagenet-1k | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the reviews @apsdehal and @lhoestq! As suggested by @lhoestq, I'll separate the train/val/test splits, apply the validation split fixes and shuffle the images files to simplify the script and make streaming faster.",
"@a... | 2022-05-09T20:49:18 | 2022-05-25T14:54:59 | 2022-05-25T14:46:16 | Remove the manual download code from `imagenet-1k` to make it a regular dataset. | mariosasko | https://github.com/huggingface/datasets/pull/4299 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4299",
"html_url": "https://github.com/huggingface/datasets/pull/4299",
"diff_url": "https://github.com/huggingface/datasets/pull/4299.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4299.patch",
"merged_at": "2022-05-25T14:46... | true |
1,229,748,006 | 4,298 | Normalise license names | closed | [
"we'll add the same server-side metadata validation system as for hf.co/models soon-ish\r\n\r\n(you can check on hf.co/models that licenses are \"clean\")",
"Fixed by #4367."
] | 2022-05-09T13:51:32 | 2022-05-20T09:51:50 | 2022-05-20T09:51:50 | **Is your feature request related to a problem? Please describe.**
When browsing datasets, the Licenses tag cloud (bottom left of e.g. https://huggingface.co/datasets) has multiple variants of the same license. This means the options exclude datasets arbitrarily, giving users artificially low recall. The cause of the ... | leondz | https://github.com/huggingface/datasets/issues/4298 | null | false |
1,229,735,498 | 4,297 | Datasets YAML tagging space is down | closed | [
"@lhoestq @albertvillanova `update-task-list` branch does not exist anymore, should point to `main` now i guess",
"Thanks for reporting, fixing it now",
"It's up again :)"
] | 2022-05-09T13:45:05 | 2022-05-09T14:44:25 | 2022-05-09T14:44:25 | ## Describe the bug
The neat hf spaces app for generating YAML tags for dataset `README.md`s is down
## Steps to reproduce the bug
1. Visit https://huggingface.co/spaces/huggingface/datasets-tagging
## Expected results
There'll be a HF spaces web app for generating dataset metadata YAML
## Actual results
T... | leondz | https://github.com/huggingface/datasets/issues/4297 | null | false |
1,229,554,645 | 4,296 | Fix URL query parameters in compression hop path when streaming | open | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4296). All of your documentation changes will be reflected on that endpoint."
] | 2022-05-09T11:18:22 | 2022-07-06T15:19:53 | null | Fix #3488. | albertvillanova | https://github.com/huggingface/datasets/pull/4296 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4296",
"html_url": "https://github.com/huggingface/datasets/pull/4296",
"diff_url": "https://github.com/huggingface/datasets/pull/4296.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4296.patch",
"merged_at": null
} | true |
1,229,527,283 | 4,295 | Fix missing lz4 dependency for tests | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-09T10:53:20 | 2022-05-09T11:21:22 | 2022-05-09T11:13:44 | Currently, `lz4` is not defined as a dependency for tests. Therefore, all tests marked with `@require_lz4` are skipped. | albertvillanova | https://github.com/huggingface/datasets/pull/4295 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4295",
"html_url": "https://github.com/huggingface/datasets/pull/4295",
"diff_url": "https://github.com/huggingface/datasets/pull/4295.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4295.patch",
"merged_at": "2022-05-09T11:13... | true |
1,229,455,582 | 4,294 | Fix CLI run_beam save_infos | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-09T09:47:43 | 2022-05-10T07:04:04 | 2022-05-10T06:56:10 | Currently, it raises TypeError:
```
TypeError: _download_and_prepare() got an unexpected keyword argument 'save_infos'
``` | albertvillanova | https://github.com/huggingface/datasets/pull/4294 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4294",
"html_url": "https://github.com/huggingface/datasets/pull/4294",
"diff_url": "https://github.com/huggingface/datasets/pull/4294.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4294.patch",
"merged_at": "2022-05-10T06:56... | true |
1,228,815,477 | 4,293 | Fix wrong map parameter name in cache docs | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-08T07:27:46 | 2022-06-14T16:49:00 | 2022-06-14T16:07:00 | The `load_from_cache` parameter of `map` should be `load_from_cache_file`. | h4iku | https://github.com/huggingface/datasets/pull/4293 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4293",
"html_url": "https://github.com/huggingface/datasets/pull/4293",
"diff_url": "https://github.com/huggingface/datasets/pull/4293.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4293.patch",
"merged_at": "2022-06-14T16:07... | true |
1,228,216,788 | 4,292 | Add API code examples for remaining main classes | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-06T18:15:31 | 2022-05-25T18:05:13 | 2022-05-25T17:56:36 | This PR adds API code examples for the remaining functions in the Main classes. I wasn't too familiar with some of the functions (`decode_batch`, `decode_column`, `decode_example`, etc.) so please feel free to add an example of usage and I can fill in the rest :) | stevhliu | https://github.com/huggingface/datasets/pull/4292 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4292",
"html_url": "https://github.com/huggingface/datasets/pull/4292",
"diff_url": "https://github.com/huggingface/datasets/pull/4292.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4292.patch",
"merged_at": "2022-05-25T17:56... | true |
1,227,777,500 | 4,291 | Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message | closed | [
"Hi @leondz, thanks for reporting.\r\n\r\nIndeed, the dataset viewer relies on the dataset being streamable (passing `streaming=True` to `load_dataset`). Whereas most of the datastes are streamable out of the box (thanks to our implementation of streaming), there are still some exceptions.\r\n\r\nIn particular, in ... | 2022-05-06T12:03:27 | 2022-05-09T08:25:58 | 2022-05-09T08:25:58 | ### Link
https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train
### Description
The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. What did I miss?
### Owner
Yes | leondz | https://github.com/huggingface/datasets/issues/4291 | null | false |
1,227,592,826 | 4,290 | Update paper link in medmcqa dataset card | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@albertvillanova Kindly check :)"
] | 2022-05-06T08:52:51 | 2022-09-30T11:51:28 | 2022-09-30T11:49:07 | Updating readme in medmcqa dataset. | monk1337 | https://github.com/huggingface/datasets/pull/4290 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4290",
"html_url": "https://github.com/huggingface/datasets/pull/4290",
"diff_url": "https://github.com/huggingface/datasets/pull/4290.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4290.patch",
"merged_at": "2022-09-30T11:49... | true |
1,226,821,732 | 4,288 | Add missing `faiss` import to fix https://github.com/huggingface/datasets/issues/4287 | closed | [] | 2022-05-05T15:21:49 | 2022-05-10T12:55:06 | 2022-05-10T12:09:48 | This PR fixes the issue recently mentioned in https://github.com/huggingface/datasets/issues/4287 🤗 | alvarobartt | https://github.com/huggingface/datasets/pull/4288 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4288",
"html_url": "https://github.com/huggingface/datasets/pull/4288",
"diff_url": "https://github.com/huggingface/datasets/pull/4288.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4288.patch",
"merged_at": "2022-05-10T12:09... | true |