| id | number | title | state | comments | created_at | updated_at | closed_at | body | user | html_url | pull_request | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,287,941,058 | 4,590 | Generalize meta_path json file creation in load.py [#4540] | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@albertvillanova, Can you please review this PR for Issue #4540 ",
"@lhoestq Thank you for merging the PR . Is there a slack channel for contributing to the datasets library. I would love to work on the library and make meaningfu... | 2022-06-28T21:48:06 | 2022-07-08T14:55:13 | 2022-07-07T13:17:45 | # What does this PR do?
## Summary
*In function `_copy_script_and_other_resources_in_importable_dir`, using string split when generating `meta_path` throws an error in the edge case raised in #4540.*
## Additions
-
## Changes
- Changed meta_path to use `os.path.splitext` instead of using `str.split` to gener... | VijayKalmath | https://github.com/huggingface/datasets/pull/4590 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4590",
"html_url": "https://github.com/huggingface/datasets/pull/4590",
"diff_url": "https://github.com/huggingface/datasets/pull/4590.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4590.patch",
"merged_at": "2022-07-07T13:17... | true |
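The fix described in PR #4590 boils down to using `os.path.splitext` instead of naive string splitting; a minimal sketch of the difference (the helper name `make_meta_path` is hypothetical, not the actual function in `load.py`):

```python
import os


def make_meta_path(script_path: str) -> str:
    # str.split(".") breaks whenever the path contains extra dots
    # (e.g. "./my.dataset/loader.py" would split on the directory's dot),
    # while os.path.splitext only strips the final extension.
    base, _ext = os.path.splitext(script_path)
    return base + ".json"


print(make_meta_path("./my.dataset/loader.py"))  # ./my.dataset/loader.json
```

For comparison, `"./my.dataset/loader.py".split(".")[0]` yields an empty string, which is exactly the kind of edge case the issue describes.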
1,287,600,029 | 4,589 | Permission denied: '/home/.cache' when load_dataset with local script | closed | [] | 2022-06-28T16:26:03 | 2022-06-29T06:26:28 | 2022-06-29T06:25:08 | null | jiangh0 | https://github.com/huggingface/datasets/issues/4589 | null | false |
1,287,368,751 | 4,588 | Host head_qa data on the Hub and fix NonMatchingChecksumError | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @albertvillanova ! Thanks for the fix ;)\r\nCan I safely checkout from this branch to build `datasets` or it is preferable to wait until all CI tests pass?\r\nThanks π ",
"@younesbelkada we have just merged this PR."
] | 2022-06-28T13:39:28 | 2022-07-05T16:01:15 | 2022-07-05T15:49:52 | This PR:
- Hosts head_qa data on the Hub instead of Google Drive
- Fixes NonMatchingChecksumError
Fix https://huggingface.co/datasets/head_qa/discussions/1 | albertvillanova | https://github.com/huggingface/datasets/pull/4588 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4588",
"html_url": "https://github.com/huggingface/datasets/pull/4588",
"diff_url": "https://github.com/huggingface/datasets/pull/4588.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4588.patch",
"merged_at": "2022-07-05T15:49... | true |
1,287,291,494 | 4,587 | Validate new_fingerprint passed by user | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-28T12:46:21 | 2022-06-28T14:11:57 | 2022-06-28T14:00:44 | Users can pass the dataset fingerprint they want in `map` and other dataset transforms.
However, the fingerprint is used to name cache files, so we need to make sure it doesn't contain bad characters, as mentioned in https://github.com/huggingface/datasets/issues/1718, and that it's not too long | lhoestq | https://github.com/huggingface/datasets/pull/4587 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4587",
"html_url": "https://github.com/huggingface/datasets/pull/4587",
"diff_url": "https://github.com/huggingface/datasets/pull/4587.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4587.patch",
"merged_at": "2022-06-28T14:00... | true |
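Since a user-supplied fingerprint ends up in cache file names, validation along these lines is plausible (a sketch, not the code merged in PR #4587; the allowed-character set and length limit here are assumptions):

```python
import re

MAX_FINGERPRINT_LENGTH = 64  # assumed limit; cache filenames must stay short


def validate_fingerprint(fingerprint: str) -> None:
    # Cache files are named after the fingerprint, so only filesystem-safe
    # characters are accepted and the length is bounded.
    if not isinstance(fingerprint, str) or not fingerprint:
        raise ValueError("Fingerprint must be a non-empty string")
    if re.search(r"[^a-zA-Z0-9_]", fingerprint):
        raise ValueError(f"Invalid fingerprint {fingerprint!r}: only alphanumerics and '_' allowed")
    if len(fingerprint) > MAX_FINGERPRINT_LENGTH:
        raise ValueError(f"Fingerprint too long (> {MAX_FINGERPRINT_LENGTH} chars)")


validate_fingerprint("my_tokenized_v2")  # passes silently
```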
1,287,105,636 | 4,586 | Host pn_summary data on the Hub instead of Google Drive | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-28T10:05:05 | 2022-06-28T14:52:56 | 2022-06-28T14:42:03 | Fix #4581. | albertvillanova | https://github.com/huggingface/datasets/pull/4586 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4586",
"html_url": "https://github.com/huggingface/datasets/pull/4586",
"diff_url": "https://github.com/huggingface/datasets/pull/4586.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4586.patch",
"merged_at": "2022-06-28T14:42... | true |
1,287,064,929 | 4,585 | Host multi_news data on the Hub instead of Google Drive | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-28T09:32:06 | 2022-06-28T14:19:35 | 2022-06-28T14:08:48 | Host data files of multi_news dataset on the Hub.
They were on Google Drive.
Fix #4580. | albertvillanova | https://github.com/huggingface/datasets/pull/4585 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4585",
"html_url": "https://github.com/huggingface/datasets/pull/4585",
"diff_url": "https://github.com/huggingface/datasets/pull/4585.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4585.patch",
"merged_at": "2022-06-28T14:08... | true |
1,286,911,993 | 4,584 | Add binary classification task IDs | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4584). All of your documentation changes will be reflected on that endpoint.",
"> Awesome thanks ! Can you add it to https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts first please ? This is where ... | 2022-06-28T07:30:39 | 2023-09-24T10:04:04 | 2023-01-26T09:27:52 | As a precursor to aligning the task IDs in `datasets` and AutoTrain, we need a way to distinguish binary vs multiclass vs multilabel classification.
This PR adds binary classification to the task IDs to enable this.
Related AutoTrain issue: https://github.com/huggingface/autonlp-backend/issues/597
cc @abhishek... | lewtun | https://github.com/huggingface/datasets/pull/4584 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4584",
"html_url": "https://github.com/huggingface/datasets/pull/4584",
"diff_url": "https://github.com/huggingface/datasets/pull/4584.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4584.patch",
"merged_at": null
} | true |
1,286,790,871 | 4,583 | Implementation of FLAC support using torchaudio | closed | [] | 2022-06-28T05:24:21 | 2022-06-28T05:47:02 | 2022-06-28T05:47:02 | I added Audio FLAC support with torchaudio, given that Librosa and SoundFile can give problems. Also, FLAC is being used as the audio format by https://mlcommons.org/en/peoples-speech/ | rafael-ariascalles | https://github.com/huggingface/datasets/pull/4583 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4583",
"html_url": "https://github.com/huggingface/datasets/pull/4583",
"diff_url": "https://github.com/huggingface/datasets/pull/4583.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4583.patch",
"merged_at": null
} | true |
1,286,517,060 | 4,582 | add_column should preserve _indexes | open | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4582). All of your documentation changes will be reflected on that endpoint."
] | 2022-06-27T22:35:47 | 2022-07-06T15:19:54 | null | https://github.com/huggingface/datasets/issues/3769#issuecomment-1167146126
Doing `.add_column("x", x_data)` also removed any `_indexes` on the dataset; we decided this shouldn't be the case.
This was because `add_column` created a new `Dataset(...)`, and it wasn't possible to pass indexes on init.
with this PR now... | cceyda | https://github.com/huggingface/datasets/pull/4582 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4582",
"html_url": "https://github.com/huggingface/datasets/pull/4582",
"diff_url": "https://github.com/huggingface/datasets/pull/4582.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4582.patch",
"merged_at": null
} | true |
1,286,362,907 | 4,581 | Dataset Viewer issue for pn_summary | closed | [
"linked to https://github.com/huggingface/datasets/issues/4580#issuecomment-1168373066?",
"Note that I refreshed twice this dataset, and I still have (another) error on one of the splits\r\n\r\n```\r\nStatus code: 400\r\nException: ClientResponseError\r\nMessage: 403, message='Forbidden', url=URL('htt... | 2022-06-27T20:56:12 | 2022-06-28T14:42:03 | 2022-06-28T14:42:03 | ### Link
https://huggingface.co/datasets/pn_summary/viewer/1.0.0/validation
### Description
Getting an index error on the `validation` and `test` splits:
```
Server error
Status code: 400
Exception: IndexError
Message: list index out of range
```
### Owner
No | lewtun | https://github.com/huggingface/datasets/issues/4581 | null | false |
1,286,312,912 | 4,580 | Dataset Viewer issue for multi_news | closed | [
"Thanks for reporting, @lewtun.\r\n\r\nI forced the refreshing of the preview and it worked OK for train and validation splits.\r\n\r\nI guess the error has to do with the data files being hosted at Google Drive: this gives errors when requested automatically using scripts.\r\nWe should host them to fix the error. ... | 2022-06-27T20:25:25 | 2022-06-28T14:08:48 | 2022-06-28T14:08:48 | ### Link
https://huggingface.co/datasets/multi_news
### Description
Not sure what the index error is referring to here:
```
Status code: 400
Exception: IndexError
Message: list index out of range
```
### Owner
No | lewtun | https://github.com/huggingface/datasets/issues/4580 | null | false |
1,286,106,285 | 4,579 | Support streaming cfq dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq I've been refactoring a little the code:\r\n- Use less RAM by loading only the required samples: only if its index is in the splits file\r\n- Start yielding \"earlier\" in streaming mode: for each `split_idx`:\r\n - either ... | 2022-06-27T17:11:23 | 2022-07-04T19:35:01 | 2022-07-04T19:23:57 | Support streaming cfq dataset. | albertvillanova | https://github.com/huggingface/datasets/pull/4579 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4579",
"html_url": "https://github.com/huggingface/datasets/pull/4579",
"diff_url": "https://github.com/huggingface/datasets/pull/4579.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4579.patch",
"merged_at": "2022-07-04T19:23... | true |
1,286,086,400 | 4,578 | [Multi Configs] Use directories to differentiate between subsets/configurations | open | [
"I want to be able to create folders in a model.",
"How to set new split names, instead of train/test/validation? For example, I have a local dataset, consists of several subsets, named \"A\", \"B\", and \"C\". How can I create a huggingface dataset, with splits A/B/C ?\r\n\r\nThe document in https://huggingface.... | 2022-06-27T16:55:11 | 2023-06-14T15:43:05 | null | Currently to define several subsets/configurations of your dataset, you need to use a dataset script.
However, it would be nice to have a no-code way to do this.
For example we could specify different configurations of a dataset (for example, if a dataset contains different languages) with one directory per confi... | lhoestq | https://github.com/huggingface/datasets/issues/4578 | null | false |
1,285,703,775 | 4,577 | Add authentication tip to `load_dataset` | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-27T12:05:34 | 2022-07-04T13:13:15 | 2022-07-04T13:01:30 | Add an authentication tip similar to the one in transformers' `PreTrainedModel.from_pretrained` to `load_dataset`/`load_dataset_builder`. | mariosasko | https://github.com/huggingface/datasets/pull/4577 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4577",
"html_url": "https://github.com/huggingface/datasets/pull/4577",
"diff_url": "https://github.com/huggingface/datasets/pull/4577.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4577.patch",
"merged_at": "2022-07-04T13:01... | true |
1,285,698,576 | 4,576 | Include `metadata.jsonl` in resolved data files | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I still don't know if the way we implemented data files resolution could support the metadata.jsonl file without bad side effects for the other packaged builders. In particular here if you have a folder of csv/parquet/whatever files ... | 2022-06-27T12:01:29 | 2022-07-01T12:44:55 | 2022-06-30T10:15:32 | Include `metadata.jsonl` in resolved data files.
Fix #4548
@lhoestq ~~https://github.com/huggingface/datasets/commit/d94336d30eef17fc9abc67f67fa1c139661f4e75 adds support for metadata files placed at the root, and https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 accounts fo... | mariosasko | https://github.com/huggingface/datasets/pull/4576 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4576",
"html_url": "https://github.com/huggingface/datasets/pull/4576",
"diff_url": "https://github.com/huggingface/datasets/pull/4576.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4576.patch",
"merged_at": "2022-06-30T10:15... | true |
1,285,446,700 | 4,575 | Problem about wmt17 zh-en dataset | closed | [
"Running into the same error with `wmt17/zh-en`, `wmt18/zh-en` and `wmt19/zh-en`.",
"@albertvillanova @lhoestq Could you take a look at this issue?",
"@winterfell2021 Hi, I wonder where the code you provided should be added. I tried to add them in the `datasets/table.py` in `array_cast` function, however, the '... | 2022-06-27T08:35:42 | 2022-08-23T10:01:02 | 2022-08-23T10:00:21 | It seems that in subset casia2015, some samples are like `{'c[hn]':'xxx', 'en': 'aa'}`.
So using `data = load_dataset('wmt17', "zh-en")` to load the wmt17 zh-en dataset raises the exception:
```
Traceback (most recent call last):
File "train.py", line 78, in <module>
data = load_dataset(args.... | winterfell2021 | https://github.com/huggingface/datasets/issues/4575 | null | false |
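One comment above mentions patching `array_cast` in `datasets/table.py` as a workaround; conceptually, the workaround amounts to normalizing malformed translation keys before casting. A purely illustrative sketch (the helper name and matching strategy are assumptions, not the reported patch):

```python
def normalize_translation_keys(sample: dict, expected=("zh", "en")) -> dict:
    # If a sample carries a malformed key such as "c[hn]" alongside the good
    # ones, reassign the stray value to the one expected language code that
    # is missing. Only acts when the mapping is unambiguous.
    missing = [lang for lang in expected if lang not in sample]
    stray = [key for key in sample if key not in expected]
    if len(missing) == 1 and len(stray) == 1:
        sample = dict(sample)
        sample[missing[0]] = sample.pop(stray[0])
    return sample


print(normalize_translation_keys({"c[hn]": "xxx", "en": "aa"}))
# {'en': 'aa', 'zh': 'xxx'}
```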
1,285,380,616 | 4,574 | Support streaming mlsum dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"After unpinning `s3fs` and pinning `fsspec[http]>=2021.11.1`, the CI installs\r\n- `fsspec-2022.1.0`\r\n- `s3fs-0.5.1`\r\n\r\nand raises the following error:\r\n```\r\n ImportError while loading conftest '/home/runner/work/datasets/d... | 2022-06-27T07:37:03 | 2022-07-21T13:37:30 | 2022-07-21T12:40:00 | Support streaming mlsum dataset.
This PR:
- pins `fsspec` min version with fixed BlockSizeError: `fsspec[http]>=2021.11.1`
- https://github.com/fsspec/filesystem_spec/pull/830
- unpins `s3fs==2021.08.1` to align it with `fsspec` requirement: `s3fs>=2021.11.1`
> s3fs 2021.8.1 requires fsspec==2021.08.1
- s... | albertvillanova | https://github.com/huggingface/datasets/pull/4574 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4574",
"html_url": "https://github.com/huggingface/datasets/pull/4574",
"diff_url": "https://github.com/huggingface/datasets/pull/4574.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4574.patch",
"merged_at": "2022-07-21T12:40... | true |
1,285,023,629 | 4,573 | Fix evaluation metadata for ncbi_disease | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets."
] | 2022-06-26T20:29:32 | 2023-09-24T09:35:07 | 2022-09-23T09:38:02 | This PR fixes the task in the evaluation metadata and removes the metrics info as we've decided this is not a great way to propagate this information downstream. | lewtun | https://github.com/huggingface/datasets/pull/4573 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4573",
"html_url": "https://github.com/huggingface/datasets/pull/4573",
"diff_url": "https://github.com/huggingface/datasets/pull/4573.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4573.patch",
"merged_at": null
} | true |
1,285,022,499 | 4,572 | Dataset Viewer issue for mlsum | closed | [
"Thanks for reporting, @lewtun.\r\n\r\nAfter investigation, it seems that the server https://gitlab.lip6.fr does not allow HTTP Range requests.\r\n\r\nWe are trying to find a workaround..."
] | 2022-06-26T20:24:17 | 2022-07-21T12:40:01 | 2022-07-21T12:40:01 | ### Link
https://huggingface.co/datasets/mlsum/viewer/de/train
### Description
There's seems to be a problem with the download / streaming of this dataset:
```
Server error
Status code: 400
Exception: BadZipFile
Message: File is not a zip file
```
### Owner
No | lewtun | https://github.com/huggingface/datasets/issues/4572 | null | false |
1,284,883,289 | 4,571 | move under the facebook org? | open | [
"Related to https://github.com/huggingface/datasets/issues/4562#issuecomment-1166911751\r\n\r\nI'll assign @albertvillanova ",
"I'm just wondering why we don't have this dataset under:\r\n- the `facebook` namespace\r\n- or the canonical dataset `flores`: why does this only have 2 languages?",
"fwiw: the dataset... | 2022-06-26T11:19:09 | 2023-09-25T12:05:18 | null | ### Link
https://huggingface.co/datasets/gsarti/flores_101
### Description
It seems like streaming isn't supported for this dataset:
```
Server Error
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://dl.fbaipublicfiles.com/flores101/dataset... | lewtun | https://github.com/huggingface/datasets/issues/4571 | null | false |
1,284,846,168 | 4,570 | Dataset sharding non-contiguous? | closed | [
"This was silly; I was sure I'd looked for a `contiguous` argument, and was certain there wasn't one the first time I looked :smile:\r\n\r\nSorry about that.",
"Hi! You can pass `contiguous=True` to `.shard()` get contiguous shards. More info on this and the default behavior can be found in the [docs](https://hug... | 2022-06-26T08:34:05 | 2022-06-30T11:00:47 | 2022-06-26T14:36:20 | ## Describe the bug
I'm not sure if this is a bug; more likely normal behavior, but I wanted to double-check.
Is it normal that `datasets.shard` does not produce chunks that, when concatenated, produce the original ordering of the sharded dataset?
This might be related to this pull request (https://github.com/huggi... | cakiki | https://github.com/huggingface/datasets/issues/4570 | null | false |
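The resolution in the comments is that `Dataset.shard(num_shards, index, contiguous=True)` yields contiguous chunks, while the default picks every `num_shards`-th row. The two behaviors can be illustrated on plain index lists (a sketch of the indexing logic, not the library's implementation):

```python
def shard_indices(n_rows, num_shards, index, contiguous=False):
    # Default behavior: rows index, index + num_shards, index + 2*num_shards, ...
    if not contiguous:
        return list(range(index, n_rows, num_shards))
    # Contiguous: consecutive blocks, earlier shards absorb the remainder rows.
    div, mod = divmod(n_rows, num_shards)
    start = div * index + min(index, mod)
    end = start + div + (1 if index < mod else 0)
    return list(range(start, end))


# Concatenating contiguous shards recovers the original row order:
rows = [i for s in range(3) for i in shard_indices(10, 3, s, contiguous=True)]
print(rows)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```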
1,284,833,694 | 4,569 | Dataset Viewer issue for sst2 | closed | [
"Hi @lewtun, thanks for reporting.\r\n\r\nI have checked locally and refreshed the preview and it seems working smooth now:\r\n```python\r\nIn [8]: ds\r\nOut[8]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 67349\r\n })\r\n validation: Datas... | 2022-06-26T07:32:54 | 2022-06-27T06:37:48 | 2022-06-27T06:37:48 | ### Link
https://huggingface.co/datasets/sst2
### Description
Not sure what is causing this, however it seems that `load_dataset("sst2")` also hangs (even though it downloads the files without problem):
```
Status code: 400
Exception: Exception
Message: Give up after 5 attempts with Connectio... | lewtun | https://github.com/huggingface/datasets/issues/4569 | null | false |
1,284,655,624 | 4,568 | XNLI cache reload is very slow | closed | [
"Hi,\r\nCould you tell us how you are running this code?\r\nI tested on my machine (M1 Mac). And it is running fine both on and off internet.\r\n\r\n<img width=\"1033\" alt=\"Screen Shot 2022-07-03 at 1 32 25 AM\" src=\"https://user-images.githubusercontent.com/8711912/177026364-4ad7cedb-e524-4513-97f7-7961bbb34c90... | 2022-06-25T16:43:56 | 2022-07-04T14:29:40 | 2022-07-04T14:29:40 | ### Reproduce
Using `2.3.3.dev0`
`from datasets import load_dataset`
`load_dataset("xnli", "en")`
Turn off Internet
`load_dataset("xnli", "en")`
I cancelled the second `load_dataset` eventually cuz it took super long. It would be great to have something to specify e.g. `only_load_from_cache` and avoid the ... | Muennighoff | https://github.com/huggingface/datasets/issues/4568 | null | false |
1,284,528,474 | 4,567 | Add evaluation data for amazon_reviews_multi | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets."
] | 2022-06-25T09:40:52 | 2023-09-24T09:35:22 | 2022-09-23T09:37:23 | null | lewtun | https://github.com/huggingface/datasets/pull/4567 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4567",
"html_url": "https://github.com/huggingface/datasets/pull/4567",
"diff_url": "https://github.com/huggingface/datasets/pull/4567.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4567.patch",
"merged_at": null
} | true |
1,284,397,594 | 4,566 | Document link #load_dataset_enhancing_performance points to nowhere | closed | [
"Hi! This is indeed the link the docstring should point to. Are you interested in submitting a PR to fix this?",
"https://github.com/huggingface/datasets/blame/master/docs/source/cache.mdx#L93\r\n\r\nThere seems already an anchor here. Somehow it doesn't work. I am not very familiar with how this online documenta... | 2022-06-25T01:18:19 | 2023-01-24T16:33:40 | 2023-01-24T16:33:40 | ## Describe the bug
A clear and concise description of what the bug is.

The [load_dataset_enhancing_performance](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#load_dat... | subercui | https://github.com/huggingface/datasets/issues/4566 | null | false |
1,284,141,666 | 4,565 | Add UFSC OCPap dataset | closed | [
"I will add this directly on the hub (same as #4486)βin https://huggingface.co/lapix"
] | 2022-06-24T20:07:54 | 2022-07-06T19:03:02 | 2022-07-06T19:03:02 | ## Adding a Dataset
- **Name:** UFSC OCPap: Papanicolaou Stained Oral Cytology Dataset (v4)
- **Description:** The UFSC OCPap dataset comprises 9,797 labeled images of 1200x1600 pixels, acquired from oral brush samples on 5 slides from cancer-diagnosed patients and 3 slides from healthy ones, all from distinct patients.
- **Paper:** https://dx.doi.... | johnnv1 | https://github.com/huggingface/datasets/issues/4565 | null | false |
1,283,932,333 | 4,564 | Support streaming bookcorpus dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-24T16:13:39 | 2022-07-06T09:34:48 | 2022-07-06T09:23:04 | Support streaming bookcorpus dataset. | albertvillanova | https://github.com/huggingface/datasets/pull/4564 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4564",
"html_url": "https://github.com/huggingface/datasets/pull/4564",
"diff_url": "https://github.com/huggingface/datasets/pull/4564.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4564.patch",
"merged_at": "2022-07-06T09:23... | true |
1,283,914,383 | 4,563 | Support streaming allocine dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-24T15:55:03 | 2022-06-24T16:54:57 | 2022-06-24T16:44:41 | Support streaming allocine dataset.
Fix #4562. | albertvillanova | https://github.com/huggingface/datasets/pull/4563 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4563",
"html_url": "https://github.com/huggingface/datasets/pull/4563",
"diff_url": "https://github.com/huggingface/datasets/pull/4563.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4563.patch",
"merged_at": "2022-06-24T16:44... | true |
1,283,779,557 | 4,562 | Dataset Viewer issue for allocine | closed | [
"I removed my assignment as @huggingface/datasets should be able to answer better than me\r\n",
"Let me have a look...",
"Thanks for the quick fix @albertvillanova ",
"Note that the underlying issue is that datasets containing TAR files are not streamable out of the box: they need being iterated with `dl_mana... | 2022-06-24T13:50:38 | 2022-06-27T06:39:32 | 2022-06-24T16:44:41 | ### Link
https://huggingface.co/datasets/allocine
### Description
Not sure if this is a problem with `bz2` compression, but I thought these datasets could be streamed:
```
Status code: 400
Exception: AttributeError
Message: 'TarContainedFile' object has no attribute 'readable'
```
### Owner
No | lewtun | https://github.com/huggingface/datasets/issues/4562 | null | false |
1,283,624,242 | 4,561 | Add evaluation data to acronym_identification | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-24T11:17:33 | 2022-06-27T09:37:55 | 2022-06-27T08:49:22 | null | lewtun | https://github.com/huggingface/datasets/pull/4561 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4561",
"html_url": "https://github.com/huggingface/datasets/pull/4561",
"diff_url": "https://github.com/huggingface/datasets/pull/4561.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4561.patch",
"merged_at": "2022-06-27T08:49... | true |
1,283,558,873 | 4,560 | Add evaluation metadata to imagenet-1k | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets."
] | 2022-06-24T10:12:41 | 2023-09-24T09:35:32 | 2022-09-23T09:37:03 | null | lewtun | https://github.com/huggingface/datasets/pull/4560 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4560",
"html_url": "https://github.com/huggingface/datasets/pull/4560",
"diff_url": "https://github.com/huggingface/datasets/pull/4560.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4560.patch",
"merged_at": null
} | true |
1,283,544,937 | 4,559 | Add action names in schema_guided_dstc8 dataset card | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-24T10:00:01 | 2022-06-24T10:54:28 | 2022-06-24T10:43:47 | As asked in https://huggingface.co/datasets/schema_guided_dstc8/discussions/1, I added the action names in the dataset card | lhoestq | https://github.com/huggingface/datasets/pull/4559 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4559",
"html_url": "https://github.com/huggingface/datasets/pull/4559",
"diff_url": "https://github.com/huggingface/datasets/pull/4559.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4559.patch",
"merged_at": "2022-06-24T10:43... | true |
1,283,479,650 | 4,558 | Add evaluation metadata to wmt14 | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4558). All of your documentation changes will be reflected on that endpoint.",
"As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets."
] | 2022-06-24T09:08:54 | 2023-09-24T09:35:39 | 2022-09-23T09:36:50 | null | lewtun | https://github.com/huggingface/datasets/pull/4558 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4558",
"html_url": "https://github.com/huggingface/datasets/pull/4558",
"diff_url": "https://github.com/huggingface/datasets/pull/4558.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4558.patch",
"merged_at": null
} | true |
1,283,473,889 | 4,557 | Add evaluation metadata to wmt16 | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4557). All of your documentation changes will be reflected on that endpoint.",
"> Just to confirm: we should add this metadata via GitHub and not Hub PRs for canonical datasets right?\r\n\r\nyes :)",
"As discussed with @lewtu... | 2022-06-24T09:04:23 | 2023-09-24T09:35:49 | 2022-09-23T09:36:32 | Just to confirm: we should add this metadata via GitHub and not Hub PRs for canonical datasets right? | lewtun | https://github.com/huggingface/datasets/pull/4557 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4557",
"html_url": "https://github.com/huggingface/datasets/pull/4557",
"diff_url": "https://github.com/huggingface/datasets/pull/4557.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4557.patch",
"merged_at": null
} | true |
1,283,462,881 | 4,556 | Dataset Viewer issue for conll2003 | closed | [
"Fixed, thanks."
] | 2022-06-24T08:55:18 | 2022-06-24T09:50:39 | 2022-06-24T09:50:39 | ### Link
https://huggingface.co/datasets/conll2003/viewer/conll2003/test
### Description
Seems like a cache problem with this config / split:
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/conll... | lewtun | https://github.com/huggingface/datasets/issues/4556 | null | false |
1,283,451,651 | 4,555 | Dataset Viewer issue for xtreme | closed | [
"Fixed, thanks."
] | 2022-06-24T08:46:08 | 2022-06-24T09:50:45 | 2022-06-24T09:50:45 | ### Link
https://huggingface.co/datasets/xtreme/viewer/PAN-X.de/test
### Description
There seems to be a problem with the cache of this config / split:
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/data... | lewtun | https://github.com/huggingface/datasets/issues/4555 | null | false |
1,283,369,453 | 4,554 | Fix WMT dataset loading issue and docs update (Re-opened) | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-24T07:26:16 | 2022-07-08T15:39:20 | 2022-07-08T15:27:44 | This PR is a fix for #4354
Changes are made for `wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t`. And READMEs are updated for the corresponding datasets.
Let me know, if any additional changes are required.
Thanks | khushmeeet | https://github.com/huggingface/datasets/pull/4554 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4554",
"html_url": "https://github.com/huggingface/datasets/pull/4554",
"diff_url": "https://github.com/huggingface/datasets/pull/4554.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4554.patch",
"merged_at": "2022-07-08T15:27... | true |
1,282,779,560 | 4,553 | Stop dropping columns in to_tf_dataset() before we load batches | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq Rebasing fixed the test failures, so this should be ready to review now! There's still a failure on Win but it seems unrelated.",
"Gentle ping @lhoestq ! This is a simple fix (dropping columns after loading a batch from th... | 2022-06-23T18:21:05 | 2022-07-04T19:00:13 | 2022-07-04T18:49:01 | `to_tf_dataset()` dropped unnecessary columns before loading batches from the dataset, but this is causing problems when using a transform, because the dropped columns might be needed to compute the transform. Since there's no real way to check which columns the transform might need, we skip dropping columns and instea... | Rocketknight1 | https://github.com/huggingface/datasets/pull/4553 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4553",
"html_url": "https://github.com/huggingface/datasets/pull/4553",
"diff_url": "https://github.com/huggingface/datasets/pull/4553.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4553.patch",
"merged_at": "2022-07-04T18:49... | true |
1,282,615,646 | 4,552 | Tell users to upload on the hub directly | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks ! I updated the two remaining files"
] | 2022-06-23T15:47:52 | 2022-06-26T15:49:46 | 2022-06-26T15:39:11 | As noted in https://github.com/huggingface/datasets/pull/4534, it is still not clear that it is recommended to add datasets on the Hugging Face Hub directly instead of GitHub, so I updated some docs.
Moreover since users won't be able to get reviews from us on the Hub, I added a paragraph to tell users that they can... | lhoestq | https://github.com/huggingface/datasets/pull/4552 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4552",
"html_url": "https://github.com/huggingface/datasets/pull/4552",
"diff_url": "https://github.com/huggingface/datasets/pull/4552.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4552.patch",
"merged_at": "2022-06-26T15:39... | true |
1,282,534,807 | 4,551 | Perform hidden file check on relative data file path | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm aware of this behavior, which is tricky to solve due to fsspec's hidden file handling (see https://github.com/huggingface/datasets/issues/4115#issuecomment-1108819538). I've tested some regex patterns to address this, and they se... | 2022-06-23T14:49:11 | 2022-06-30T14:49:20 | 2022-06-30T14:38:18 | Fix #4549 | mariosasko | https://github.com/huggingface/datasets/pull/4551 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4551",
"html_url": "https://github.com/huggingface/datasets/pull/4551",
"diff_url": "https://github.com/huggingface/datasets/pull/4551.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4551.patch",
"merged_at": "2022-06-30T14:38... | true |
1,282,374,441 | 4,550 | imdb source error | closed | [
"Thanks for reporting, @Muhtasham.\r\n\r\nIndeed IMDB dataset is not accessible from yesterday, because the data is hosted on the data owners servers at Stanford (http://ai.stanford.edu/) and these are down due to a power outage originated by a fire: https://twitter.com/StanfordAILab/status/1539472302399623170?s=20... | 2022-06-23T13:02:52 | 2022-06-23T13:47:05 | 2022-06-23T13:47:04 | ## Describe the bug
imdb dataset not loading
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("imdb")
```
## Expected results
## Actual results
```bash
06/23/2022 14:45:18 - INFO - datasets.builder - Dataset not on Hf google storage. Downloading and pr... | Muhtasham | https://github.com/huggingface/datasets/issues/4550 | null | false |
1,282,312,975 | 4,549 | FileNotFoundError when passing a data_file inside a directory starting with double underscores | closed | [
"I have consistently experienced this bug on GitHub actions when bumping to `2.3.2`",
"We're working on a fix ;)"
] | 2022-06-23T12:19:24 | 2022-06-30T14:38:18 | 2022-06-30T14:38:18 | Bug experienced in the `accelerate` CI: https://github.com/huggingface/accelerate/runs/7016055148?check_suite_focus=true
This is related to https://github.com/huggingface/datasets/pull/4505 and the changes from https://github.com/huggingface/datasets/pull/4412 | lhoestq | https://github.com/huggingface/datasets/issues/4549 | null | false |
1,282,218,096 | 4,548 | Metadata.jsonl for Imagefolder is ignored if it's in a parent directory to the splits directories/do not have "{split}_" prefix | closed | [
"I agree it would be nice to support this. It doesn't fit really well in the current data_files.py, where files of each splits are separated in different folder though, maybe we have to modify a bit the logic here. \r\n\r\nOne idea would be to extend `get_patterns_in_dataset_repository` and `get_patterns_locally` t... | 2022-06-23T10:58:57 | 2022-06-30T10:15:32 | 2022-06-30T10:15:32 | If data contains a single `metadata.jsonl` file for several splits, it won't be included in a dataset's `data_files` and therefore ignored.
This happens when a directory is structured as follows:
```
train/
file_1.jpg
file_2.jpg
test/
file_3.jpg
file_4.jpg
metadata.jsonl
```
or as follows:... | polinaeterna | https://github.com/huggingface/datasets/issues/4548 | null | false |
1,282,160,517 | 4,547 | [CI] Fix some warnings | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"There is a CI failure only related to the missing content of the universal_dependencies dataset card, we can ignore this failure in this PR",
"good catch, I thought I resolved them all sorry",
"Alright it should be good now"
] | 2022-06-23T10:10:49 | 2022-06-28T14:10:57 | 2022-06-28T13:59:54 | There are some warnings in the CI that are annoying, I tried to remove most of them | lhoestq | https://github.com/huggingface/datasets/pull/4547 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4547",
"html_url": "https://github.com/huggingface/datasets/pull/4547",
"diff_url": "https://github.com/huggingface/datasets/pull/4547.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4547.patch",
"merged_at": "2022-06-28T13:59... | true |
1,282,093,288 | 4,546 | [CI] fixing seqeval install in ci by pinning setuptools-scm | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-23T09:24:37 | 2022-06-23T10:24:16 | 2022-06-23T10:13:44 | The latest setuptools-scm version supported on 3.6 is 6.4.2. However for some reason circleci has version 7, which doesn't work.
I fixed this by pinning the version of setuptools-scm in the circleci job
Fix https://github.com/huggingface/datasets/issues/4544 | lhoestq | https://github.com/huggingface/datasets/pull/4546 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4546",
"html_url": "https://github.com/huggingface/datasets/pull/4546",
"diff_url": "https://github.com/huggingface/datasets/pull/4546.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4546.patch",
"merged_at": "2022-06-23T10:13... | true |
1,280,899,028 | 4,545 | Make DuplicateKeysError more user friendly [For Issue #2556] | closed | [
"> Nice thanks !\r\n> \r\n> After your changes feel free to mark this PR as \"ready for review\" ;)\r\n\r\nMarking PR ready for review.\r\n\r\n@lhoestq Let me know if there is anything else required or if we are good to go ahead and merge.",
"_The documentation is not available anymore as the PR was closed or mer... | 2022-06-22T21:01:34 | 2022-06-28T09:37:06 | 2022-06-28T09:26:04 | # What does this PR do?
## Summary
*The `DuplicateKeysError` does not provide any information regarding the examples which have the same key.*
*This information is very helpful for debugging the dataset generator script.*
## Additions
-
## Changes
- Changed `DuplicateKeysError Class` in `src/datase... | VijayKalmath | https://github.com/huggingface/datasets/pull/4545 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4545",
"html_url": "https://github.com/huggingface/datasets/pull/4545",
"diff_url": "https://github.com/huggingface/datasets/pull/4545.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4545.patch",
"merged_at": "2022-06-28T09:26... | true |
1,280,500,340 | 4,544 | [CI] seqeval installation fails sometimes on python 3.6 | closed | [] | 2022-06-22T16:35:23 | 2022-06-23T10:13:44 | 2022-06-23T10:13:44 | The CI sometimes fails to install seqeval, which cause the `seqeval` metric tests to fail.
The installation fails because of this error:
```
Collecting seqeval
Downloading seqeval-1.2.2.tar.gz (43 kB)
     |████████                        | 10 kB 42.1 MB/s eta 0:00:01
     |███████████████ ... | lhoestq | https://github.com/huggingface/datasets/issues/4544 | null | false |
1,280,379,781 | 4,543 | [CI] Fix upstream hub test url | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Remaining CI failures are unrelated to this fix, merging"
] | 2022-06-22T15:34:27 | 2022-06-22T16:37:40 | 2022-06-22T16:27:37 | Some tests were still using moon-staging instead of hub-ci.
I also updated the token to use one dedicated to `datasets` | lhoestq | https://github.com/huggingface/datasets/pull/4543 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4543",
"html_url": "https://github.com/huggingface/datasets/pull/4543",
"diff_url": "https://github.com/huggingface/datasets/pull/4543.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4543.patch",
"merged_at": "2022-06-22T16:27... | true |
1,280,269,445 | 4,542 | [to_tf_dataset] Use Feather for better compatibility with TensorFlow ? | open | [
"This has so much potential to be great! Also I think you tagged some poor random dude on the internet whose name is also Joao, lol, edited that for you! ",
"cc @sayakpaul here too, since he was interested in our new approaches to converting datasets!",
"Noted and I will look into the thread in detail tomorrow ... | 2022-06-22T14:42:00 | 2022-10-11T08:45:45 | null | To have better performance in TensorFlow, it is important to provide lists of data files in supported formats. For example sharded TFRecords datasets are extremely performant. This is because tf.data can better leverage parallelism in this case, and load one file at a time in memory.
It seems that using `tensorflow_... | lhoestq | https://github.com/huggingface/datasets/issues/4542 | null | false |
1,280,161,436 | 4,541 | Fix timestamp conversion from Pandas to Python datetime in streaming mode | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"CI failures are unrelated to this PR, merging"
] | 2022-06-22T13:40:01 | 2022-06-22T16:39:27 | 2022-06-22T16:29:09 | Arrow accepts both pd.Timestamp and datetime.datetime objects to create timestamp arrays.
However a timestamp array is always converted to datetime.datetime objects.
This created an inconsistency between streaming and non-streaming, e.g. the `ett` dataset outputs datetime.datetime objects in non-streaming but pd.tim... | lhoestq | https://github.com/huggingface/datasets/pull/4541 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4541",
"html_url": "https://github.com/huggingface/datasets/pull/4541",
"diff_url": "https://github.com/huggingface/datasets/pull/4541.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4541.patch",
"merged_at": "2022-06-22T16:29... | true |
1,280,142,942 | 4,540 | Avoid splitting by` .py` for the file. | closed | [
"Hi @espoirMur, thanks for reporting.\r\n\r\nYou are right: that code line could be improved and made more generically valid.\r\n\r\nOn the other hand, I would suggest using `os.path.splitext` instead.\r\n\r\nAre you willing to open a PR? :)",
"I will have a look.. \r\n\r\nThis weekend .. ",
"@albertvillanova ... | 2022-06-22T13:26:55 | 2022-07-07T13:17:44 | 2022-07-07T13:17:44 | https://github.com/huggingface/datasets/blob/90b3a98065556fc66380cafd780af9b1814b9426/src/datasets/load.py#L272
Hello,
Thank you for this library.
I was using it and I had one edge case: my home folder name ends with `.py` (it is `/home/espoir.py`), so anytime I am running the code to load a local module thi... | espoirMur | https://github.com/huggingface/datasets/issues/4540 | null | false |
1,279,779,829 | 4,539 | Replace deprecated logging.warn with logging.warning | closed | [] | 2022-06-22T08:32:29 | 2022-06-22T13:43:23 | 2022-06-22T12:51:51 | Replace `logging.warn` (deprecated in [Python 2.7, 2011](https://github.com/python/cpython/commit/04d5bc00a219860c69ea17eaa633d3ab9917409f)) with `logging.warning` (added in [Python 2.3, 2003](https://github.com/python/cpython/commit/6fa635df7aa88ae9fd8b41ae42743341316c90f7)).
* https://docs.python.org/3/library/log... | hugovk | https://github.com/huggingface/datasets/pull/4539 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4539",
"html_url": "https://github.com/huggingface/datasets/pull/4539",
"diff_url": "https://github.com/huggingface/datasets/pull/4539.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4539.patch",
"merged_at": "2022-06-22T12:51... | true |
1,279,409,786 | 4,538 | Dataset Viewer issue for Pile of Law | closed | [
    "Hi @Breakend, yes - we'll propose a solution today",
"Thanks so much, I appreciate it!",
"Thanks so much for adding the docs. I was able to successfully hide the viewer using the \r\n```\r\nviewer: false\r\n```\r\nflag in the README.md of the dataset. I'm closing the issue because this is resolved. Thanks agai... | 2022-06-22T02:48:40 | 2022-06-27T07:30:23 | 2022-06-26T22:26:22 | ### Link
https://huggingface.co/datasets/pile-of-law/pile-of-law
### Description
Hi, I would like to turn off the dataset viewer for our dataset without enabling access requests. To comply with upstream dataset creator requests/licenses, we would like to make sure that the data is not indexed by search engines... | Breakend | https://github.com/huggingface/datasets/issues/4538 | null | false |
1,279,144,310 | 4,537 | Fix WMT dataset loading issue and docs update | closed | [
"The PR branch now has some commits unrelated to the changes, probably due to rebasing. Can you please close this PR and open a new one from a new branch? You can use `git cherry-pick` to preserve the relevant changes:\r\n```bash\r\ngit checkout master\r\ngit remote add upstream [email protected]:huggingface/datasets... | 2022-06-21T21:48:02 | 2022-06-24T07:05:43 | 2022-06-24T07:05:10 | This PR is a fix for #4354
Changes are made for `wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t`. And READMEs are updated for the corresponding datasets.
As I am on an M1 Mac, I am not able to create a virtual `dev` environment using `pip install -e ".[dev]"`. The issue is with `tensorflow-text` not... | khushmeeet | https://github.com/huggingface/datasets/pull/4537 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4537",
"html_url": "https://github.com/huggingface/datasets/pull/4537",
"diff_url": "https://github.com/huggingface/datasets/pull/4537.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4537.patch",
"merged_at": null
} | true |
1,278,734,727 | 4,536 | Properly raise FileNotFound even if the dataset is private | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-21T17:05:50 | 2022-06-28T10:46:51 | 2022-06-28T10:36:10 | `tests/test_load.py::test_load_streaming_private_dataset` was failing because the hub now returns 401 when getting the HfApi.dataset_info of a dataset without authentication. `load_dataset` was raising ConnectionError, while it should be FileNotFoundError since it first checks for local files before checking the Hub.
... | lhoestq | https://github.com/huggingface/datasets/pull/4536 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4536",
"html_url": "https://github.com/huggingface/datasets/pull/4536",
"diff_url": "https://github.com/huggingface/datasets/pull/4536.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4536.patch",
"merged_at": "2022-06-28T10:36... | true |
1,278,365,039 | 4,535 | Add `batch_size` parameter when calling `add_faiss_index` and `add_faiss_index_from_external_arrays` | closed | [
"Also, I had a doubt while checking the code related to the indices... \r\n\r\n@lhoestq, there's a value in `config.py` named `DATASET_INDICES_FILENAME` which has the arrow extension (which I assume it should be `indices.faiss`, as the Elastic Search indices are not stored in a file, but not sure), and it's just us... | 2022-06-21T12:18:49 | 2022-06-27T16:25:09 | 2022-06-27T16:14:36 | Currently, even though the `batch_size` when adding vectors to the FAISS index can be tweaked in `FaissIndex.add_vectors()`, the function `ArrowDataset.add_faiss_index` doesn't have either the parameter `batch_size` to be propagated to the nested `FaissIndex.add_vectors` function or `*args, **kwargs`, so on, this PR ad... | alvarobartt | https://github.com/huggingface/datasets/pull/4535 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4535",
"html_url": "https://github.com/huggingface/datasets/pull/4535",
"diff_url": "https://github.com/huggingface/datasets/pull/4535.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4535.patch",
"merged_at": "2022-06-27T16:14... | true |
1,277,897,197 | 4,534 | Add `tldr_news` dataset | closed | [
"Hey @lhoestq, \r\nSorry for opening a PR, I was following the guide [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md)! Thanks for the review anyway, I will follow the instructions you sent π ",
"Thanks, we will update the guide ;)"
] | 2022-06-21T05:02:43 | 2022-06-23T14:33:54 | 2022-06-21T14:21:11 | This PR aims at adding support for a news dataset: `tldr news`.
This dataset is based on the daily [tldr tech newsletter](https://tldr.tech/newsletter) and contains a `headline` as well as a `content` for every piece of news contained in a newsletter. | JulesBelveze | https://github.com/huggingface/datasets/pull/4534 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4534",
"html_url": "https://github.com/huggingface/datasets/pull/4534",
"diff_url": "https://github.com/huggingface/datasets/pull/4534.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4534.patch",
"merged_at": null
} | true |
1,277,211,490 | 4,533 | Timestamp not returned as datetime objects in streaming mode | closed | [] | 2022-06-20T17:28:47 | 2022-06-22T16:29:09 | 2022-06-22T16:29:09 | As reported in (internal) https://github.com/huggingface/datasets-server/issues/397
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("ett", name="h2", split="test", streaming=True)
>>> d = next(iter(dataset))
>>> d['start']
Timestamp('2016-07-01 00:00:00')
```
while loading in non-... | lhoestq | https://github.com/huggingface/datasets/issues/4533 | null | false |
1,277,167,129 | 4,532 | Add Video feature | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4532). All of your documentation changes will be reflected on that endpoint.",
"@nateraw do you have any plans to continue this pr? Or should I write a custom loader script to use my video dataset in the hub?",
"@fcakyon I th... | 2022-06-20T16:36:41 | 2022-11-10T16:59:51 | 2022-11-10T16:59:51 | The following adds a `Video` feature for encoding/decoding videos on the fly from in memory bytes. It uses my own `encoded-video` library which is basically `pytorchvideo`'s encoded video but with all the `torch` specific stuff stripped out. Because of that, and because the tool I used under the hood is not very mature... | nateraw | https://github.com/huggingface/datasets/pull/4532 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4532",
"html_url": "https://github.com/huggingface/datasets/pull/4532",
"diff_url": "https://github.com/huggingface/datasets/pull/4532.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4532.patch",
"merged_at": null
} | true |
1,277,054,172 | 4,531 | Dataset Viewer issue for CSV datasets | closed | [
"this should now be fixed",
    "Confirmed, it's fixed now. Thanks for reporting, and thanks @coyotte508 for fixing it\r\n\r\n<img width=\"1123\" alt=\"Capture d'écran 2022-06-21 à 10 28 05\" src=\"https://user-images.githubusercontent.com/1676121/174753833-1b453a5a-6a90-4717-bca1-1b5fc6b75e4a.png\">\r\n"
] | 2022-06-20T14:56:24 | 2022-06-21T08:28:46 | 2022-06-21T08:28:27 | ### Link
https://huggingface.co/datasets/scikit-learn/breast-cancer-wisconsin
### Description
I'm populating CSV datasets [here](https://huggingface.co/scikit-learn) but the viewer is not enabled and it looks for a dataset loading script; the datasets aren't in the queue either.
You can replicate the problem by sim... | merveenoyan | https://github.com/huggingface/datasets/issues/4531 | null | false |
1,276,884,962 | 4,530 | Add AudioFolder packaged loader | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq @mariosasko I don't know what to do with the test, do you have any ideas? :)",
"also it's passed in `pyarrow_latest_WIN`",
"If the error only happens on 3.6, maybe #4460 can help ^^' It seems to work in 3.7 on the window... | 2022-06-20T12:54:02 | 2022-08-22T14:36:49 | 2022-08-22T14:20:40 | will close #3964
AudioFolder is almost identical to ImageFolder, except that inferring labels is not the default behavior (`drop_labels` is set to True in the config); the option of inferring them is preserved, though.
The weird thing is happening with the `test_data_files_with_metadata_and_archives` when `streaming` i... | polinaeterna | https://github.com/huggingface/datasets/pull/4530 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4530",
"html_url": "https://github.com/huggingface/datasets/pull/4530",
"diff_url": "https://github.com/huggingface/datasets/pull/4530.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4530.patch",
"merged_at": "2022-08-22T14:20... | true |
1,276,729,303 | 4,529 | Ecoset | closed | [
"Hi! Very cool dataset! I answered your questions on the forum. Also, feel free to comment `#self-assign` on this issue to self-assign it.",
"The dataset lives on the Hub [here](https://huggingface.co/datasets/kietzmannlab/ecoset), so I'm closing this issue.",
    "Hey there, thanks for closing 🤗 \r\n\r\nForgot th... | 2022-06-20T10:39:34 | 2023-10-26T09:12:32 | 2023-10-04T18:19:52 | ## Adding a Dataset
- **Name:** *Ecoset*
- **Description:** *https://www.kietzmannlab.org/ecoset/*
- **Paper:** *https://doi.org/10.1073/pnas.2011417118*
- **Data:** *https://codeocean.com/capsule/9570390/tree/v1*
- **Motivation:**
**Ecoset** was created as a clean and ecologically valid alternative to **Imagen... | DiGyt | https://github.com/huggingface/datasets/issues/4529 | null | false |
1,276,679,155 | 4,528 | Memory leak when iterating a Dataset | closed | [
"Is someone assigned to this issue?",
"The same issue is being debugged here: https://github.com/huggingface/datasets/issues/4883\r\n",
    "Here is a modified repro example that makes it easier to see the leak:\r\n\r\n```\r\n$ cat ds2.py\r\nimport gc, sys\r\nimport time\r\nfrom datasets import load_dataset\r\nimpo... | 2022-06-20T10:03:14 | 2022-09-12T08:51:39 | 2022-09-12T08:51:39 | ## Describe the bug
It seems that memory never gets freed after iterating a `Dataset` (using `.map()` or a simple `for` loop)
## Steps to reproduce the bug
```python
import gc
import logging
import time
import pyarrow
from datasets import load_dataset
from tqdm import trange
import os, psutil
logging.ba... | NouamaneTazi | https://github.com/huggingface/datasets/issues/4528 | null | false |
1,276,583,536 | 4,527 | Dataset Viewer issue for vadis/sv-ident | closed | [
    "Fixed, thanks!\r\n![Uploading Capture d'écran 2022-06-21 à 18.42.40.png…]()\r\n\r\n"
] | 2022-06-20T08:47:42 | 2022-06-21T16:42:46 | 2022-06-21T16:42:45 | ### Link
https://huggingface.co/datasets/vadis/sv-ident
### Description
The dataset preview does not work:
```
Server Error
Status code: 400
Exception: Status400Error
Message: The dataset does not exist.
```
However, the dataset is streamable and works locally:
```python
In [1]: from dataset... | albertvillanova | https://github.com/huggingface/datasets/issues/4527 | null | false |
1,276,580,185 | 4,526 | split cache used when processing different split | open | [
"I was not able to reproduce this behavior (I tried without using pytorch lightning though, since I don't know what code you ran in pytorch lightning to get this).\r\n\r\nIf you can provide a MWE that would be perfect ! :)",
    "Hi, I think the issue happened because I was loading datasets under an `if` ... `else` s... | 2022-06-20T08:44:58 | 2022-06-28T14:04:58 | null | ## Describe the bug
```
ds1 = load_dataset('squad', split='validation')
ds2 = load_dataset('squad', split='train')
ds1 = ds1.map(some_function)
ds2 = ds2.map(some_function)
assert ds1 == ds2
```
This happens when ds1 and ds2 are created in `pytorch_lightning.DataModule` through
```
class myDataModule:
... | gpucce | https://github.com/huggingface/datasets/issues/4526 | null | false |
1,276,491,386 | 4,525 | Out of memory error on workers while running Beam+Dataflow | closed | [
"Some naive ideas to cope with this:\r\n- enable more RAM on each worker\r\n- force the spanning of more workers\r\n- others?",
"@albertvillanova We were finally able to process the full NQ dataset on our machines using 600 gb with 5 workers. Maybe these numbers will work for you as well.",
"Thanks a lot for th... | 2022-06-20T07:28:12 | 2024-10-09T16:09:50 | 2024-10-09T16:09:50 | ## Describe the bug
While running the preprocessing of the natural_question dataset (see PR #4368), there is an issue for the "default" config (train+dev files).
Previously we ran the preprocessing for the "dev" config (only dev files) with success.
Train data files are larger than dev ones and apparently worker... | albertvillanova | https://github.com/huggingface/datasets/issues/4525 | null | false |
1,275,909,186 | 4,524 | Downloading via Apache Pipeline, client cancelled (org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException) | open | [
"Hi @dan-the-meme-man, thanks for reporting.\r\n\r\nWe are investigating a similar issue but with Beam+Dataflow (instead of Beam+Flink): \r\n- #4525\r\n\r\nIn order to go deeper into the root cause, we need as much information as possible: logs from the main process + logs from the workers are very informative.\r\n... | 2022-06-18T23:36:45 | 2022-06-21T00:38:20 | null | ## Describe the bug
When downloading some `wikipedia` languages (in particular, I'm having a hard time with Spanish, Cebuano, and Russian) via FlinkRunner, I encounter the exception in the title. I have been playing with package versions a lot, because unfortunately, the different dependencies required by these packag... | ddegenaro | https://github.com/huggingface/datasets/issues/4524 | null | false |
1,275,002,639 | 4,523 | Update download url and improve card of `cats_vs_dogs` dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-17T12:59:44 | 2022-06-21T14:23:26 | 2022-06-21T14:13:08 | Improve the download URL (reported here: https://huggingface.co/datasets/cats_vs_dogs/discussions/1), remove the `image_file_path` column (not used in Transformers, so it should be safe) and add more info to the card. | mariosasko | https://github.com/huggingface/datasets/pull/4523 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4523",
"html_url": "https://github.com/huggingface/datasets/pull/4523",
"diff_url": "https://github.com/huggingface/datasets/pull/4523.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4523.patch",
"merged_at": "2022-06-21T14:13... | true |
1,274,929,328 | 4,522 | Try to reduce the number of datasets that require manual download | open | [] | 2022-06-17T11:42:03 | 2022-06-17T11:52:48 | null | > Currently, 41 canonical datasets require manual download. I checked their scripts and I'm pretty sure this number can be reduced to β 30 by not relying on bash scripts to download data, hosting data directly on the Hub when the license permits, etc. Then, we will mostly be left with datasets with restricted access, w... | severo | https://github.com/huggingface/datasets/issues/4522 | null | false |
1,274,919,437 | 4,521 | Datasets method `.map` not hashing | closed | [
"Fix posted: https://github.com/huggingface/datasets/issues/4506#issuecomment-1157417219",
"Didn't realize it's a bug when I asked the question yesterday! Free free to post an answer if you are sure the cause has been addressed.\r\n\r\nhttps://stackoverflow.com/questions/72664827/can-pickle-dill-foo-but-not-lambd... | 2022-06-17T11:31:10 | 2022-08-04T12:08:16 | 2022-06-28T13:23:05 | ## Describe the bug
Datasets method `.map` not hashing, even with an empty no-op function
## Steps to reproduce the bug
```python
from datasets import load_dataset
# download 9MB dummy dataset
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean")
def prepare_dataset(batch):
return(b... | sanchit-gandhi | https://github.com/huggingface/datasets/issues/4521 | null | false |
1,274,879,180 | 4,520 | Failure to hash `dataclasses` - results in functions that cannot be hashed or cached in `.map` | closed | [
"I think this has been fixed by #4516, let me know if you encounter this again :)\r\n\r\nI re-ran your code in 3.7 and 3.9 and it works fine",
"Thank you!"
] | 2022-06-17T10:47:17 | 2022-06-28T14:47:17 | 2022-06-28T14:04:29 | Dataclasses cannot be hashed. As a result, they cannot be hashed or cached if used in the `.map` method. Dataclasses are used extensively in Transformers examples scripts: (c.f. [CTC example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py)). Since... | sanchit-gandhi | https://github.com/huggingface/datasets/issues/4520 | null | false |
1,274,110,623 | 4,519 | Create new sections for audio and vision in guides | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Ready for review!\r\n\r\nThe `toctree` is a bit longer now with the sections. I think if we keep the audio/vision/text/dataset repository sections collapsed by default, and keep the general usage expanded, it may look a little cleane... | 2022-06-16T21:38:24 | 2022-07-07T15:36:37 | 2022-07-07T15:24:58 | This PR creates separate sections in the guides for audio, vision, text, and general usage so it is easier for users to find loading, processing, or sharing guides specific to the dataset type they're working with. It'll also allow us to scale the docs to additional dataset types - like time series, tabular, etc. - whi... | stevhliu | https://github.com/huggingface/datasets/pull/4519 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4519",
"html_url": "https://github.com/huggingface/datasets/pull/4519",
"diff_url": "https://github.com/huggingface/datasets/pull/4519.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4519.patch",
"merged_at": "2022-07-07T15:24... | true |
1,274,010,628 | 4,518 | Patch tests for hfh v0.8.0 | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-16T19:45:32 | 2022-06-17T16:15:57 | 2022-06-17T16:06:07 | This PR patches testing utilities that would otherwise fail with hfh v0.8.0. | LysandreJik | https://github.com/huggingface/datasets/pull/4518 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4518",
"html_url": "https://github.com/huggingface/datasets/pull/4518",
"diff_url": "https://github.com/huggingface/datasets/pull/4518.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4518.patch",
"merged_at": "2022-06-17T16:06... | true |
1,273,960,476 | 4,517 | Add tags for task_ids:summarization-* and task_categories:summarization* | closed | [
"Associated community discussion is [here](https://huggingface.co/datasets/aeslc/discussions/1).\r\nPaper referenced in the `dataset_infos.json` is [here](https://arxiv.org/pdf/1906.03497.pdf). It mentions the _email-subject-generation_ task, which is not a tag mentioned in any other dataset so it was not added in... | 2022-06-16T18:52:25 | 2022-07-08T15:14:23 | 2022-07-08T15:02:31 | yaml header at top of README.md file was edited to add task tags because I couldn't find the existing tags in the json
separate Pull Request will modify dataset_infos.json to add these tags
The Enron dataset (dataset id aeslc) is only tagged with:
arxiv:1906.03497'
languages:en
pretty_name:AESLC
... | hobson | https://github.com/huggingface/datasets/pull/4517 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4517",
"html_url": "https://github.com/huggingface/datasets/pull/4517",
"diff_url": "https://github.com/huggingface/datasets/pull/4517.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4517.patch",
"merged_at": "2022-07-08T15:02... | true |
1,273,825,640 | 4,516 | Fix hashing for python 3.9 | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"What do you think @albertvillanova ?"
] | 2022-06-16T16:42:31 | 2022-06-28T13:33:46 | 2022-06-28T13:23:06 | In python 3.9, pickle hashes the `glob_ids` dictionary in addition to the `globs` of a function.
Therefore the test at `tests/test_fingerprint.py::RecurseDumpTest::test_recurse_dump_for_function_with_shuffled_globals` is currently failing for python 3.9
To make hashing deterministic when the globals are not in th... | lhoestq | https://github.com/huggingface/datasets/pull/4516 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4516",
"html_url": "https://github.com/huggingface/datasets/pull/4516",
"diff_url": "https://github.com/huggingface/datasets/pull/4516.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4516.patch",
"merged_at": "2022-06-28T13:23... | true |
1,273,626,131 | 4,515 | Add uppercased versions of image file extensions for automatic module inference | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-16T14:14:49 | 2022-06-16T17:21:53 | 2022-06-16T17:11:41 | Adds the uppercased versions of the image file extensions to the supported extensions.
Another approach would be to call `.lower()` on extensions while resolving data files, but uppercased extensions are not something we want to encourage out of the box IMO unless they are commonly used (as they are in the vision d... | mariosasko | https://github.com/huggingface/datasets/pull/4515 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4515",
"html_url": "https://github.com/huggingface/datasets/pull/4515",
"diff_url": "https://github.com/huggingface/datasets/pull/4515.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4515.patch",
"merged_at": "2022-06-16T17:11... | true |
1,273,505,230 | 4,514 | Allow .JPEG as a file extension | closed | [
"Hi, thanks for reporting! I've opened a PR with the fix.",
"Wow, that was quick! Thank you very much π "
] | 2022-06-16T12:36:20 | 2022-06-20T08:18:46 | 2022-06-16T17:11:40 | ## Describe the bug
When loading image data, HF datasets seems to recognize `.jpg` and `.jpeg` file extensions, but not e.g. .JPEG. As the naming convention .JPEG is used in important datasets such as imagenet, I would welcome if according extensions like .JPEG or .JPG would be allowed.
## Steps to reproduce the bu... | DiGyt | https://github.com/huggingface/datasets/issues/4514 | null | false |
1,273,450,338 | 4,513 | Update Google Cloud Storage documentation and add Azure Blob Storage example | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @stevhliu, I've kept the `>>>` before all the in-line code comments as it was done like that in the default S3 example that was already there, I assume that it's done like that just for readiness, let me know whether we should rem... | 2022-06-16T11:46:09 | 2022-06-23T17:05:11 | 2022-06-23T16:54:59 | While I was going through the π€ Datasets documentation of the Cloud storage filesystems at https://huggingface.co/docs/datasets/filesystems, I realized that the Google Cloud Storage documentation could be improved e.g. bullet point says "Load your dataset" when the actual call was to "Save your dataset", in-line code ... | alvarobartt | https://github.com/huggingface/datasets/pull/4513 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4513",
"html_url": "https://github.com/huggingface/datasets/pull/4513",
"diff_url": "https://github.com/huggingface/datasets/pull/4513.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4513.patch",
"merged_at": "2022-06-23T16:54... | true |
1,273,378,129 | 4,512 | Add links to vision tasks scripts in ADD_NEW_DATASET template | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI failure is unrelated to the PR's changes. Merging."
] | 2022-06-16T10:35:35 | 2022-07-08T14:07:50 | 2022-07-08T13:56:23 | Add links to vision dataset scripts in the ADD_NEW_DATASET template. | mariosasko | https://github.com/huggingface/datasets/pull/4512 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4512",
"html_url": "https://github.com/huggingface/datasets/pull/4512",
"diff_url": "https://github.com/huggingface/datasets/pull/4512.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4512.patch",
"merged_at": "2022-07-08T13:56... | true |
1,273,336,874 | 4,511 | Support all negative values in ClassLabel | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for this fix! I'm not sure what the release timeline is, but FYI #4508 is a breaking issue for transformer token classification using Trainer and PyTorch. PyTorch defaults to -100 as the ignored label for [negative log loss](h... | 2022-06-16T09:59:39 | 2025-07-23T18:38:15 | 2022-06-16T13:54:07 | We usually use -1 to represent a missing label, but we should also support any negative values (some users use -100 for example). This is a regression from `datasets` 2.3
Fix https://github.com/huggingface/datasets/issues/4508 | lhoestq | https://github.com/huggingface/datasets/pull/4511 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4511",
"html_url": "https://github.com/huggingface/datasets/pull/4511",
"diff_url": "https://github.com/huggingface/datasets/pull/4511.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4511.patch",
"merged_at": "2022-06-16T13:54... | true |
1,273,260,396 | 4,510 | Add regression test for `ArrowWriter.write_batch` when batch is empty | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"As mentioned by @lhoestq, the current behavior is correct and we should not expect batches with different columns, in that case, the if should fail, as the values of the batch can be empty, but not the actual `batch_examples` value."... | 2022-06-16T08:53:51 | 2022-06-16T12:38:02 | 2022-06-16T12:28:19 | As spotted by @cccntu in #4502, there's a logic bug in `ArrowWriter.write_batch` as the if-statement to handle the empty batches as detailed in the docstrings of the function ("Ignores the batch if it appears to be empty, preventing a potential schema update of unknown types."), the current if-statement is not handling... | alvarobartt | https://github.com/huggingface/datasets/pull/4510 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4510",
"html_url": "https://github.com/huggingface/datasets/pull/4510",
"diff_url": "https://github.com/huggingface/datasets/pull/4510.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4510.patch",
"merged_at": "2022-06-16T12:28... | true |
1,273,227,760 | 4,509 | Support skipping Parquet to Arrow conversion when using Beam | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4509). All of your documentation changes will be reflected on that endpoint.",
"When #4724 is merged, we can just pass `file_format=\"parquet\"` to `download_and_prepare` and it will output parquet fiels without converting to a... | 2022-06-16T08:25:38 | 2022-11-07T16:22:41 | 2022-11-07T16:22:41 | null | albertvillanova | https://github.com/huggingface/datasets/pull/4509 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4509",
"html_url": "https://github.com/huggingface/datasets/pull/4509",
"diff_url": "https://github.com/huggingface/datasets/pull/4509.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4509.patch",
"merged_at": null
} | true |
1,272,718,921 | 4,508 | cast_storage method from datasets.features | closed | [
"Hi! We've recently added a check to the `ClassLabel` type to ensure the values are in the valid label range `-1, 0, ..., num_classes-1` (-1 is used for missing values). The error in your case happens only if the `labels` column is of type `Sequence(ClassLabel(...))` before the `map` call and can be avoided by call... | 2022-06-15T20:47:22 | 2022-06-16T13:54:07 | 2022-06-16T13:54:07 | ## Describe the bug
A bug occurs when mapping a function to a dataset object. I ran the same code with the same data yesterday and it worked just fine. It works when i run locally on an old version of datasets.
## Steps to reproduce the bug
Steps are:
- load whatever datset
- write a preprocessing function such ... | romainremyb | https://github.com/huggingface/datasets/issues/4508 | null | false |
1,272,615,932 | 4,507 | How to let `load_dataset` return a `Dataset` instead of `DatasetDict` in customized loading script | closed | [
"Hi @liyucheng09.\r\n\r\nUsers can pass the `split` parameter to `load_dataset`. For example, if your split name is \"train\",\r\n```python\r\nds = load_dataset(\"dataset_name\", split=\"train\")\r\n```\r\nwill return a `Dataset` instance.",
"@albertvillanova Thanks! I can't believe I didn't know this feature til... | 2022-06-15T18:56:34 | 2022-06-16T10:40:08 | 2022-06-16T10:40:08 | If the dataset does not need splits, i.e., no training and validation split, more like a table. How can I let the `load_dataset` function return a `Dataset` object directly rather than return a `DatasetDict` object with only one key-value pair.
Or I can paraphrase the question in the following way: how to skip `_spl... | liyucheng09 | https://github.com/huggingface/datasets/issues/4507 | null | false |
1,272,516,895 | 4,506 | Failure to hash (and cache) a `.map(...)` (almost always) - using this method can produce incorrect results | closed | [
"Important info:\r\n\r\nAs hashes are generated randomly for functions, it leads to **false identifying some results as already hashed** (mapping function is not executed after a method update) when there's a `pytorch_lightning.seed_everything(123)`",
"@lhoestq\r\nseems like quite critical stuff for me, if I'm no... | 2022-06-15T17:11:31 | 2023-02-16T03:14:32 | 2022-06-28T13:23:05 | ## Describe the bug
Sometimes I get messages about not being able to hash a method:
`Parameter 'function'=<function StupidDataModule._separate_speaker_id_from_dialogue at 0x7f1b27180d30> of the transform datasets.arrow_dataset.Dataset.
_map_single couldn't be hashed properly, a random hash was used instead. Make sur... | DrMatters | https://github.com/huggingface/datasets/issues/4506 | null | false |
1,272,477,226 | 4,505 | Fix double dots in data files | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI fails are unrelated to this PR (apparently something related to `seqeval` on windows) - merging :)"
] | 2022-06-15T16:31:04 | 2022-06-15T17:15:58 | 2022-06-15T17:05:53 | As mentioned in https://github.com/huggingface/transformers/pull/17715 `data_files` can't find a file if the path contains double dots `/../`. This has been introduced in https://github.com/huggingface/datasets/pull/4412, by trying to ignore hidden files and directories (i.e. if they start with a dot)
I fixed this a... | lhoestq | https://github.com/huggingface/datasets/pull/4505 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4505",
"html_url": "https://github.com/huggingface/datasets/pull/4505",
"diff_url": "https://github.com/huggingface/datasets/pull/4505.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4505.patch",
"merged_at": "2022-06-15T17:05... | true |
1,272,418,480 | 4,504 | Can you please add the Stanford dog dataset? | closed | [
"would you like to give it a try, @dgrnd4? (maybe with the help of the dataset author?)",
"@julien-c i am sorry but I have no idea about how it works: can I add the dataset by myself, following \"instructions to add a new dataset\"?\r\nCan I add a dataset even if it's not mine? (it's public in the link that I wro... | 2022-06-15T15:39:35 | 2024-12-09T15:44:11 | 2023-10-18T18:55:30 | ## Adding a Dataset
- **Name:** *Stanford dog dataset*
- **Description:** *The dataset is about 120 classes for a total of 20.580 images. You can find the dataset here http://vision.stanford.edu/aditya86/ImageNetDogs/*
- **Paper:** *http://vision.stanford.edu/aditya86/ImageNetDogs/*
- **Data:** *[link to the Github... | dgrnd4 | https://github.com/huggingface/datasets/issues/4504 | null | false |
1,272,367,055 | 4,503 | Refactor and add metadata to fever dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"But this is somehow fever v3 dataset (see this link https://fever.ai/ under the dropdown menu called Datasets). Our fever dataset already contains v1 and v2 configs. Then, I added this as if v3 config (but named feverous instead of v... | 2022-06-15T14:59:47 | 2022-07-06T11:54:15 | 2022-07-06T11:41:30 | Related to: #4452 and #3792. | albertvillanova | https://github.com/huggingface/datasets/pull/4503 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4503",
"html_url": "https://github.com/huggingface/datasets/pull/4503",
"diff_url": "https://github.com/huggingface/datasets/pull/4503.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4503.patch",
"merged_at": "2022-07-06T11:41... | true |
1,272,353,700 | 4,502 | Logic bug in arrow_writer? | closed | [
"Hi @cccntu you're right, as when `batch_examples={}` the current if-statement won't be triggered as the condition won't be satisfied, I'll prepare a PR to address it as well as add the regression tests so that this issue is handled properly.",
"Hi @alvarobartt ,\r\nThanks for answering. Do you know when and why ... | 2022-06-15T14:50:00 | 2022-06-18T15:15:51 | 2022-06-18T15:15:51 | https://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L475-L488
I got some error, and I found it's caused by `batch_examples` being `{}`. I wonder if the code should be as follows:
```
- if batch_examples and len(next(iter(batch_examples.values())... | changjonathanc | https://github.com/huggingface/datasets/issues/4502 | null | false |
1,272,300,646 | 4,501 | Corrected broken links in doc | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-15T14:12:17 | 2022-06-15T15:11:05 | 2022-06-15T15:00:56 | null | clefourrier | https://github.com/huggingface/datasets/pull/4501 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4501",
"html_url": "https://github.com/huggingface/datasets/pull/4501",
"diff_url": "https://github.com/huggingface/datasets/pull/4501.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4501.patch",
"merged_at": "2022-06-15T15:00... | true |
1,272,281,992 | 4,500 | Add `concatenate_datasets` for iterable datasets | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks ! I addressed your comments :)\r\n\r\n> There is a slight difference in concatenate_datasets between the version for map-style datasets and the one for iterable datasets\r\n\r\nIndeed, here is what I did to fix this:\r\n\r\n- ... | 2022-06-15T13:58:50 | 2022-06-28T21:25:39 | 2022-06-28T21:15:04 | `concatenate_datasets` currently only supports lists of `datasets.Dataset`, not lists of `datasets.IterableDataset` like `interleave_datasets`
Fix https://github.com/huggingface/datasets/issues/2564
I also moved `_interleave_map_style_datasets` from combine.py to arrow_dataset.py, since the logic depends a lot on... | lhoestq | https://github.com/huggingface/datasets/pull/4500 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4500",
"html_url": "https://github.com/huggingface/datasets/pull/4500",
"diff_url": "https://github.com/huggingface/datasets/pull/4500.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4500.patch",
"merged_at": "2022-06-28T21:15... | true |
1,272,118,162 | 4,499 | fix ETT m1/m2 test/val dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thansk for the fix ! Can you regenerate the datasets_infos.json please ? This way it will update the expected number of examples in the test and val splits",
"ah yes!"
] | 2022-06-15T11:51:02 | 2022-06-15T14:55:56 | 2022-06-15T14:45:13 | https://huggingface.co/datasets/ett/discussions/1 | kashif | https://github.com/huggingface/datasets/pull/4499 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4499",
"html_url": "https://github.com/huggingface/datasets/pull/4499",
"diff_url": "https://github.com/huggingface/datasets/pull/4499.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4499.patch",
"merged_at": "2022-06-15T14:45... | true |
1,272,100,549 | 4,498 | WER and CER > 1 | closed | [
"WER can have values bigger than 1.0, this is expected when there are too many insertions\r\n\r\nFrom [wikipedia](https://en.wikipedia.org/wiki/Word_error_rate):\r\n> Note that since N is the number of words in the reference, the word error rate can be larger than 1.0"
] | 2022-06-15T11:35:12 | 2022-06-15T16:38:05 | 2022-06-15T16:38:05 | ## Describe the bug
It seems that in some cases in which the `prediction` is longer than the `reference` we may have word/character error rate higher than 1 which is a bit odd.
If it's a real bug I think I can solve it with a PR changing [this](https://github.com/huggingface/datasets/blob/master/metrics/wer/wer.py#... | sadrasabouri | https://github.com/huggingface/datasets/issues/4498 | null | false |
1,271,964,338 | 4,497 | Re-add download_manager module in utils | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the fix.\r\n\r\nI'm wondering how this fixes backward compatibility...\r\n\r\nExecuting this code:\r\n```python\r\nfrom datasets.utils.download_manager import DownloadMode\r\n```\r\nwe will have\r\n```python\r\nDownloadMod... | 2022-06-15T09:44:33 | 2022-06-15T10:33:28 | 2022-06-15T10:23:44 | https://github.com/huggingface/datasets/pull/4384 moved `datasets.utils.download_manager` to `datasets.download.download_manager`
This breaks `evaluate` which imports `DownloadMode` from `datasets.utils.download_manager`
This PR re-adds `datasets.utils.download_manager` without circular imports.
We could also... | lhoestq | https://github.com/huggingface/datasets/pull/4497 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4497",
"html_url": "https://github.com/huggingface/datasets/pull/4497",
"diff_url": "https://github.com/huggingface/datasets/pull/4497.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4497.patch",
"merged_at": "2022-06-15T10:23... | true |
1,271,945,704 | 4,496 | Replace `assertEqual` with `assertTupleEqual` in unit tests for verbosity | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"FYI I used the following regex to look for the `assertEqual` statements where the assertion was being done over a Tuple: `self.assertEqual(.*, \\(.*,)(\\)\\))$`, hope this is useful!"
] | 2022-06-15T09:29:16 | 2022-07-07T17:06:51 | 2022-07-07T16:55:48 | As detailed in #4419 and as suggested by @mariosasko, we could replace the `assertEqual` assertions with `assertTupleEqual` when the assertion is between Tuples, in order to make the tests more verbose. | alvarobartt | https://github.com/huggingface/datasets/pull/4496 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4496",
"html_url": "https://github.com/huggingface/datasets/pull/4496",
"diff_url": "https://github.com/huggingface/datasets/pull/4496.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4496.patch",
"merged_at": "2022-07-07T16:55... | true |
1,271,851,025 | 4,495 | Fix patching module that doesn't exist | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-15T08:17:50 | 2022-06-15T16:40:49 | 2022-06-15T08:54:09 | Reported in https://github.com/huggingface/huggingface_hub/runs/6894703718?check_suite_focus=true
When trying to patch `scipy.io.loadmat`:
```python
ModuleNotFoundError: No module named 'scipy'
```
Instead it shouldn't raise an error and do nothing
Bug introduced by #4375
Fix https://github.com/hugging... | lhoestq | https://github.com/huggingface/datasets/pull/4495 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4495",
"html_url": "https://github.com/huggingface/datasets/pull/4495",
"diff_url": "https://github.com/huggingface/datasets/pull/4495.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4495.patch",
"merged_at": "2022-06-15T08:54... | true |
1,271,850,599 | 4,494 | Patching fails for modules that are not installed or don't exist | closed | [] | 2022-06-15T08:17:29 | 2022-06-15T08:54:09 | 2022-06-15T08:54:09 | Reported in https://github.com/huggingface/huggingface_hub/runs/6894703718?check_suite_focus=true
When trying to patch `scipy.io.loadmat`:
```python
ModuleNotFoundError: No module named 'scipy'
```
Instead it shouldn't raise an error and do nothing
We use patching to extend such functions to support remot... | lhoestq | https://github.com/huggingface/datasets/issues/4494 | null | false |
1,271,306,385 | 4,493 | Add `@transmit_format` in `flatten` | closed | [
"@mariosasko please let me know whether we need to include some sort of tests to make sure that the decorator is working as expected. Thanks! π€ ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4493). All of your documentation changes will be reflected on that endpoint.",
... | 2022-06-14T20:09:09 | 2022-09-27T11:37:25 | 2022-09-27T10:48:54 | As suggested by @mariosasko in https://github.com/huggingface/datasets/pull/4411, we should include the `@transmit_format` decorator to `flatten`, `rename_column`, and `rename_columns` so as to ensure that the value of `_format_columns` in an `ArrowDataset` is properly updated.
**Edit**: according to @mariosasko com... | alvarobartt | https://github.com/huggingface/datasets/pull/4493 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4493",
"html_url": "https://github.com/huggingface/datasets/pull/4493",
"diff_url": "https://github.com/huggingface/datasets/pull/4493.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4493.patch",
"merged_at": null
} | true |
1,271,112,497 | 4,492 | Pin the revision in imagenet download links | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-14T17:15:17 | 2022-06-14T17:35:13 | 2022-06-14T17:25:45 | Use the commit sha in the data files URLs of the imagenet-1k download script, in case we want to restructure the data files in the future. For example we may split it into many more shards for better paralellism.
cc @mariosasko | lhoestq | https://github.com/huggingface/datasets/pull/4492 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4492",
"html_url": "https://github.com/huggingface/datasets/pull/4492",
"diff_url": "https://github.com/huggingface/datasets/pull/4492.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4492.patch",
"merged_at": "2022-06-14T17:25... | true |
1,270,803,822 | 4,491 | Dataset Viewer issue for Pavithree/test | closed | [
"This issue can be resolved according to this post https://stackoverflow.com/questions/70566660/parquet-with-null-columns-on-pyarrow. It looks like first data entry in the json file must not have any null values as pyarrow uses this first file to infer schema for entire dataset."
] | 2022-06-14T13:23:10 | 2022-06-14T14:37:21 | 2022-06-14T14:34:33 | ### Link
https://huggingface.co/datasets/Pavithree/test
### Description
I have extracted the subset of original eli5 dataset found at hugging face. However, while loading the dataset It throws ArrowNotImplementedError: Unsupported cast from string to null using function cast_null error. Is there anything missi... | Pavithree | https://github.com/huggingface/datasets/issues/4491 | null | false |