| id (int64) | number (int64) | title (string) | state (2 classes) | comments (list) | created_at (timestamp) | updated_at (timestamp) | closed_at (timestamp) | body (string) | user (string) | html_url (string) | pull_request (dict) | is_pull_request (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,306,788,322 | 4,693 | update `samsum` script | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"We are closing PRs to dataset scripts because we are moving them to the Hub.\r\n\r\nThanks anyway.\r\n\r\n"
] | 2022-07-16T11:53:05 | 2022-09-23T11:40:11 | 2022-09-23T11:37:57 | update `samsum` script after #4672 was merged (citation is also updated) | bhavitvyamalik | https://github.com/huggingface/datasets/pull/4693 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4693",
"html_url": "https://github.com/huggingface/datasets/pull/4693",
"diff_url": "https://github.com/huggingface/datasets/pull/4693.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4693.patch",
"merged_at": null
} | true |
1,306,609,680 | 4,692 | Unable to cast a column with `Image()` by using the `cast_column()` feature | closed | [
"Hi, thanks for reporting! A PR (https://github.com/huggingface/datasets/pull/4614) has already been opened to address this issue."
] | 2022-07-15T22:56:03 | 2022-07-19T13:36:24 | 2022-07-19T13:36:24 | ## Describe the bug
A clear and concise description of what the bug is.
When I create a dataset, then add a column to the created dataset through the `dataset.add_column` feature and then try to cast a column of the dataset (this column contains image paths) with `Image()` by using the `cast_column()` feature, I ge... | skrishnan99 | https://github.com/huggingface/datasets/issues/4692 | null | false |
1,306,389,656 | 4,691 | Dataset Viewer issue for rajistics/indian_food_images | closed | [
"Hi, thanks for reporting. I triggered a refresh of the preview for this dataset, and it works now. I'm not sure what occurred.\r\n<img width=\"1019\" alt=\"Capture d’écran 2022-07-18 à 11 01 52\" src=\"https://user-images.githubusercontent.com/1676121/179541327-f62ecd5e-a18a-4d91-b316-9e2ebde77a28.png\">\r\n\r\n... | 2022-07-15T19:03:15 | 2022-07-18T15:02:03 | 2022-07-18T15:02:03 | ### Link
https://huggingface.co/datasets/rajistics/indian_food_images/viewer/rajistics--indian_food_images/train
### Description
I have a train/test split in my dataset
<img width="410" alt="Screen Shot 2022-07-15 at 11 44 42 AM" src="https://user-images.githubusercontent.com/6808012/179293215-7b419ec3-3527-46f2-8... | rajshah4 | https://github.com/huggingface/datasets/issues/4691 | null | false |
1,306,321,975 | 4,690 | Refactor base extractors | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-15T17:47:48 | 2022-07-18T08:46:56 | 2022-07-18T08:34:49 | This PR:
- Refactors base extractors as subclasses of `BaseExtractor`:
- this is an abstract class defining the interface with:
- `is_extractable`: abstract class method
- `extract`: abstract static method
- Implements abstract `MagicNumberBaseExtractor` (as subclass of `BaseExtractor`):
- this has a... | albertvillanova | https://github.com/huggingface/datasets/pull/4690 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4690",
"html_url": "https://github.com/huggingface/datasets/pull/4690",
"diff_url": "https://github.com/huggingface/datasets/pull/4690.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4690.patch",
"merged_at": "2022-07-18T08:34... | true |
1,306,230,203 | 4,689 | Test extractors for all compression formats | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-15T16:29:55 | 2022-07-15T17:47:02 | 2022-07-15T17:35:24 | This PR:
- Adds all compression formats to `test_extractor`
- Tests each base extractor for all compression formats
Note that all compression formats are tested except "rar". | albertvillanova | https://github.com/huggingface/datasets/pull/4689 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4689",
"html_url": "https://github.com/huggingface/datasets/pull/4689",
"diff_url": "https://github.com/huggingface/datasets/pull/4689.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4689.patch",
"merged_at": "2022-07-15T17:35... | true |
1,306,100,488 | 4,688 | Skip test_extractor only for zstd param if zstandard not installed | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-15T14:23:47 | 2022-07-15T15:27:53 | 2022-07-15T15:15:24 | Currently, if `zstandard` is not installed, `test_extractor` is skipped for all compression format parameters.
This PR fixes `test_extractor` so that if `zstandard` is not installed, `test_extractor` is skipped only for the `zstd` compression parameter, that is, it is not skipped for all the other compression parame... | albertvillanova | https://github.com/huggingface/datasets/pull/4688 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4688",
"html_url": "https://github.com/huggingface/datasets/pull/4688",
"diff_url": "https://github.com/huggingface/datasets/pull/4688.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4688.patch",
"merged_at": "2022-07-15T15:15... | true |
1,306,021,415 | 4,687 | Trigger CI also on push to main | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-15T13:11:29 | 2022-07-15T13:47:21 | 2022-07-15T13:35:23 | Currently, the new CI (on GitHub Actions) is only triggered on pull request branches when the base branch is main.
This PR also triggers the CI when a PR is merged into the main branch. | albertvillanova | https://github.com/huggingface/datasets/pull/4687 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4687",
"html_url": "https://github.com/huggingface/datasets/pull/4687",
"diff_url": "https://github.com/huggingface/datasets/pull/4687.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4687.patch",
"merged_at": "2022-07-15T13:35... | true |
1,305,974,924 | 4,686 | Align logging with Transformers (again) | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4686). All of your documentation changes will be reflected on that endpoint.",
"I wasn't aware of https://github.com/huggingface/datasets/pull/1845 before opening this PR. This issue seems much more complex now ..."
] | 2022-07-15T12:24:29 | 2023-09-24T10:05:34 | 2023-07-11T18:29:27 | Fix #2832 | mariosasko | https://github.com/huggingface/datasets/pull/4686 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4686",
"html_url": "https://github.com/huggingface/datasets/pull/4686",
"diff_url": "https://github.com/huggingface/datasets/pull/4686.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4686.patch",
"merged_at": null
} | true |
1,305,861,708 | 4,685 | Fix mock fsspec | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-15T10:23:12 | 2022-07-15T13:05:03 | 2022-07-15T12:52:40 | This PR:
- Removes an unused method from `DummyTestFS`
- Refactors `mock_fsspec` to make it simpler | albertvillanova | https://github.com/huggingface/datasets/pull/4685 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4685",
"html_url": "https://github.com/huggingface/datasets/pull/4685",
"diff_url": "https://github.com/huggingface/datasets/pull/4685.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4685.patch",
"merged_at": "2022-07-15T12:52... | true |
1,305,554,654 | 4,684 | How to assign new values to Dataset? | closed | [
"Hi! One option is to use `map` with a function that overwrites the labels (`dset = dset.map(lambda _: {\"label\": 0}, features=dset.features)`). Or you can use the `remove_columns` + `add_column` combination (`dset = dset.remove_columns(\"label\").add_column(\"label\", [0]*len(data)).cast(dset.features)`, but note that... | 2022-07-15T04:17:57 | 2023-03-20T15:50:41 | 2022-10-10T11:53:38 | 
Hi, if I want to change some values of the dataset, or add new columns to it, how can I do it?
For example, I want to change all the labels of the SST2 dataset to `0`:
```python
from datasets import l... | beyondguo | https://github.com/huggingface/datasets/issues/4684 | null | false |
1,305,443,253 | 4,683 | Update create dataset card docs | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-15T00:41:29 | 2022-07-18T17:26:00 | 2022-07-18T13:24:10 | This PR proposes removing the [online dataset card creator](https://huggingface.co/datasets/card-creator/) in favor of simply copy/pasting a template and using the [Datasets Tagger app](https://huggingface.co/spaces/huggingface/datasets-tagging) to generate the tags. The Tagger app provides more guidance by showing all... | stevhliu | https://github.com/huggingface/datasets/pull/4683 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4683",
"html_url": "https://github.com/huggingface/datasets/pull/4683",
"diff_url": "https://github.com/huggingface/datasets/pull/4683.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4683.patch",
"merged_at": "2022-07-18T13:24... | true |
1,304,788,215 | 4,682 | weird issue/bug with columns (dataset iterable/stream mode) | open | [] | 2022-07-14T13:26:47 | 2022-07-14T13:26:47 | null | I have a dataset online (CloverSearch/cc-news-mutlilingual) that has a bunch of columns, two of which are "score_title_maintext" and "score_title_description". The original files are JSONL-formatted. I was trying to iterate through via streaming mode and grab all "score_title_description" values, but I kept getting key... | eunseojo | https://github.com/huggingface/datasets/issues/4682 | null | false |
1,304,617,484 | 4,681 | IndexError when loading ImageFolder | closed | [
"Hi, thanks for reporting! If there are no examples in ImageFolder, the `label` column is of type `ClassLabel(names=[])`, which leads to an error in [this line](https://github.com/huggingface/datasets/blob/c15b391942764152f6060b59921b09cacc5f22a6/src/datasets/arrow_writer.py#L387) as `asdict(info)` calls `Features(... | 2022-07-14T10:57:55 | 2022-07-25T12:37:54 | 2022-07-25T12:37:54 | ## Describe the bug
Loading an image dataset with `imagefolder` throws `IndexError: list index out of range` when the given folder contains a non-image file (like a csv).
## Steps to reproduce the bug
Put a csv file in a folder with images and load it:
```python
import datasets
datasets.load_dataset("imagefold... | johko | https://github.com/huggingface/datasets/issues/4681 | null | false |
1,304,534,770 | 4,680 | Dataset Viewer issue for codeparrot/xlcost-text-to-code | closed | [
"There seems to be an issue with the `C++-snippet-level` config:\r\n\r\n```python\r\n>>> from datasets import get_dataset_split_names\r\n>>> get_dataset_split_names(\"codeparrot/xlcost-text-to-code\", \"C++-snippet-level\")\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-server/services/... | 2022-07-14T09:45:50 | 2022-07-18T16:37:00 | 2022-07-18T16:04:36 | ### Link
https://huggingface.co/datasets/codeparrot/xlcost-text-to-code
### Description
Error
```
Server Error
Status code: 400
Exception: TypeError
Message: 'NoneType' object is not iterable
```
Before I did a minor change in the dataset script (removing some comments), the viewer was working but... | loubnabnl | https://github.com/huggingface/datasets/issues/4680 | null | false |
1,303,980,648 | 4,679 | Added method to remove excess nesting in a DatasetDict | closed | [
"Hi ! I think the issue you linked is closed and suggests using `remove_columns`.\r\n\r\nMoreover, if you end up with a dataset with unnecessarily nested data, please modify your processing functions to not output nested data, or use `map(..., batched=True)` if your function takes batches as input",
"Hi @lhoestq... | 2022-07-13T21:49:37 | 2022-07-21T15:55:26 | 2022-07-21T10:55:02 | Added the ability for a DatasetDict to remove additional nested layers within its features to avoid conflicts when collating. It is meant to accompany [this PR](https://github.com/huggingface/transformers/pull/18119) to resolve the same issue [#15505](https://github.com/huggingface/transformers/issues/15505).
@stas0... | CakeCrusher | https://github.com/huggingface/datasets/pull/4679 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4679",
"html_url": "https://github.com/huggingface/datasets/pull/4679",
"diff_url": "https://github.com/huggingface/datasets/pull/4679.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4679.patch",
"merged_at": null
} | true |
1,303,741,432 | 4,678 | Cant pass streaming dataset to dataloader after take() | open | [
"Hi! Calling `take` on an iterable/streamable dataset makes it not possible to shard the dataset, which in turn disables multi-process loading (attempts to split the workload over the shards), so to go past this limitation, you can either use single-process loading in `DataLoader` (`num_workers=None`) or fetch the ... | 2022-07-13T17:34:18 | 2022-07-14T13:07:21 | null | ## Describe the bug
I am trying to pass a streaming version of c4 to a dataloader, but it can't be passed after I call `dataset.take(n)`. Some functions such as `shuffle()` can be applied without breaking the dataloader but not take.
## Steps to reproduce the bug
```python
import datasets
import torch
dset = ... | zankner | https://github.com/huggingface/datasets/issues/4678 | null | false |
1,302,258,440 | 4,677 | Random 400 Client Error when pushing dataset | closed | [
"did you ever fix this? I'm experiencing the same",
"I am having the same issue. Even the simple example from the documentation gives me the 400 Error\r\n\r\n\r\n> from datasets import load_dataset\r\n> \r\n> dataset = load_dataset(\"stevhliu/demo\")\r\n> dataset.push_to_hub(\"processed_demo\")\r\n\r\n\r\n`reques... | 2022-07-12T15:56:44 | 2023-02-07T13:54:10 | 2023-02-07T13:54:10 | ## Describe the bug
When pushing a dataset, the client errors randomly with `Bad Request for url:...`.
At the next call, a new parquet file is created for each shard.
The client may fail at any random shard.
## Steps to reproduce the bug
```python
dataset.push_to_hub("ORG/DATASET", private=True, branch="main")
... | msis | https://github.com/huggingface/datasets/issues/4677 | null | false |
1,302,202,028 | 4,676 | Dataset.map gets stuck on _cast_to_python_objects | closed | [
"Are you able to reproduce this? My example is small enough that it should be easy to try.",
"Hi! Thanks for reporting and providing a reproducible example. Indeed, by default, `datasets` performs an expensive cast on the values returned by `map` to convert them to one of the types supported by PyArrow (the under... | 2022-07-12T15:09:58 | 2022-10-03T13:01:04 | 2022-10-03T13:01:03 | ## Describe the bug
`Dataset.map`, when fed a Huggingface Tokenizer as its map func, can sometimes spend huge amounts of time doing casts. A minimal example follows.
Not all usages suffer from this. For example, I profiled the preprocessor at https://github.com/huggingface/notebooks/blob/main/examples/question_an... | srobertjames | https://github.com/huggingface/datasets/issues/4676 | null | false |
1,302,193,649 | 4,675 | Unable to use dataset with PyTorch dataloader | open | [
"Hi! `para_crawl` has a single column of type `Translation`, which stores translation dictionaries. These dictionaries can be stored in a NumPy array but not in a PyTorch tensor since PyTorch only supports numeric types. In `datasets`, the conversion to `torch` works as follows: \r\n1. convert PyArrow table to NumP... | 2022-07-12T15:04:04 | 2022-07-14T14:17:46 | null | ## Describe the bug
When using `.with_format("torch")`, an arrow table is returned and I am unable to use it by passing it to a PyTorch DataLoader: please see the code below.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
ds = load_dataset(
... | BlueskyFR | https://github.com/huggingface/datasets/issues/4675 | null | false |
1,301,294,844 | 4,674 | Issue loading datasets -- pyarrow.lib has no attribute | closed | [
"Hi @margotwagner, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your bug: in an environment with datasets-2.3.2 and pyarrow-8.0.0, I can load the datasets without any problem:\r\n```python\r\n>>> ds = load_dataset(\"glue\", \"cola\")\r\n>>> ds\r\nDatasetDict({\r\n train: Dataset({\r\n ... | 2022-07-11T22:10:44 | 2023-02-28T18:06:55 | 2023-02-28T18:06:55 | ## Describe the bug
I am trying to load sentiment analysis datasets from huggingface, but any dataset I try to use via load_dataset, I get the same error:
`AttributeError: module 'pyarrow.lib' has no attribute 'IpcReadOptions'`
## Steps to reproduce the bug
```python
dataset = load_dataset("glue", "cola")
```
... | margotwagner | https://github.com/huggingface/datasets/issues/4674 | null | false |
1,301,010,331 | 4,673 | load_datasets on csv returns everything as a string | closed | [
"Hi @courtneysprouse, thanks for reporting.\r\n\r\nYes, you are right: by default the \"csv\" loader loads all columns as strings. \r\n\r\nYou could tweak this behavior by passing the `features` argument to `load_dataset`, but it is also true that currently it is not possible to perform some kinds of casts, due to la... | 2022-07-11T17:30:24 | 2024-11-05T03:55:10 | 2022-07-12T13:33:08 | ## Describe the bug
If you use:
`conll_dataset.to_csv("ner_conll.csv")`
It will create a csv file with all of your data as expected, however when you load it with:
`conll_dataset = load_dataset("csv", data_files="ner_conll.csv")`
everything is read in as a string. For example if I look at everything in 'n... | courtneysprouse | https://github.com/huggingface/datasets/issues/4673 | null | false |
1,300,911,467 | 4,672 | Support extract 7-zip compressed data files | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Cool! Can you please remove `Fix #3541` from the description as this PR doesn't add support for streaming/`iter_archive`, so it only partially addresses the issue?\r\n\r\nSide note:\r\nI think we can use `libarchive` (`libarchive-c` ... | 2022-07-11T15:56:51 | 2022-07-15T13:14:27 | 2022-07-15T13:02:07 | Fix partially #3541, fix #4670. | albertvillanova | https://github.com/huggingface/datasets/pull/4672 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4672",
"html_url": "https://github.com/huggingface/datasets/pull/4672",
"diff_url": "https://github.com/huggingface/datasets/pull/4672.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4672.patch",
"merged_at": "2022-07-15T13:02... | true |
1,300,385,909 | 4,671 | Dataset Viewer issue for wmt16 | closed | [
"Thanks for reporting, @lewtun.\r\n\r\n~We can't load the dataset locally, so I think this is an issue with the loading script (not the viewer).~\r\n\r\n We are investigating...",
"Recently, there was a merged PR related to this dataset:\r\n- #4554\r\n\r\nWe are looking at this...",
"Indeed, the above mentioned... | 2022-07-11T08:34:11 | 2022-09-13T13:27:02 | 2022-09-08T08:16:06 | ### Link
https://huggingface.co/datasets/wmt16
### Description
[Reported](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/12#62cb83f14c7f35284e796f9c) by a user of AutoTrain Evaluate. AFAIK this dataset was working 1-2 weeks ago, and I'm not sure how to interpret this error.
```
Status cod... | lewtun | https://github.com/huggingface/datasets/issues/4671 | null | false |
1,299,984,246 | 4,670 | Can't extract files from `.7z` zipfile using `download_and_extract` | closed | [
"Hi @bhavitvyamalik, thanks for reporting.\r\n\r\nYes, currently we do not support 7zip archive compression: I think we should.\r\n\r\nAs a workaround, you could uncompress it explicitly, like done in e.g. `samsum` dataset: \r\n\r\nhttps://github.com/huggingface/datasets/blob/fedf891a08bfc77041d575fad6c26091bc0fce5... | 2022-07-10T18:16:49 | 2022-07-15T13:02:07 | 2022-07-15T13:02:07 | ## Describe the bug
I'm adding a new dataset which is a `.7z` archive on Google Drive containing 3 JSON files. I'm able to download the data files using `download_and_extract`, but after downloading it throws this error:
```
>>> dataset = load_dataset("./datasets/mantis/")
Using custom data configuration d... | bhavitvyamalik | https://github.com/huggingface/datasets/issues/4670 | null | false |
1,299,848,003 | 4,669 | loading oscar-corpus/OSCAR-2201 raises an error | closed | [
"I had to use the appropriate token for use_auth_token. Thank you."
] | 2022-07-10T07:09:30 | 2022-07-11T09:27:49 | 2022-07-11T09:27:49 | ## Describe the bug
load_dataset('oscar-2201', 'af')
raises an error:
Traceback (most recent call last):
File "/usr/lib/python3.8/code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
File "..python3.8/site-packages/datasets/load.py", line 1656, in load_dataset
... | vitalyshalumov | https://github.com/huggingface/datasets/issues/4669 | null | false |
1,299,735,893 | 4,668 | Dataset Viewer issue for hungnm/multilingual-amazon-review-sentiment-processed | closed | [
"It seems like a private dataset. The viewer is currently not supported for private datasets."
] | 2022-07-09T18:04:13 | 2022-07-11T07:47:47 | 2022-07-11T07:47:47 | ### Link
https://huggingface.co/hungnm/multilingual-amazon-review-sentiment
### Description
_No response_
### Owner
Yes | ghost | https://github.com/huggingface/datasets/issues/4668 | null | false |
1,299,735,703 | 4,667 | Dataset Viewer issue for hungnm/multilingual-amazon-review-sentiment-processed | closed | [] | 2022-07-09T18:03:15 | 2022-07-11T07:47:15 | 2022-07-11T07:47:15 | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | ghost | https://github.com/huggingface/datasets/issues/4667 | null | false |
1,299,732,238 | 4,666 | Issues with concatenating datasets | closed | [
"Hi! I agree we should improve the features equality checks to account for this particular case. However, your code fails due to `answer_start` having the dtype `int64` instead of `int32` after loading from JSON (it's not possible to embed type precision info into a JSON file; `save_to_disk` does that for arrow fil... | 2022-07-09T17:45:14 | 2022-07-12T17:16:15 | 2022-07-12T17:16:14 | ## Describe the bug
It is impossible to concatenate datasets if a feature is a sequence of dicts in one dataset and a dict of sequences in another. But based on the documentation, it should be automatically converted.
> A [datasets.Sequence](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datas... | ChenghaoMou | https://github.com/huggingface/datasets/issues/4666 | null | false |
1,299,652,638 | 4,665 | Unable to create dataset having Python dataset script only | closed | [
"Hi @aleSuglia, thanks for reporting.\r\n\r\nWe are having a look at it. \r\n\r\nWe transfer this issue to the Community tab of the corresponding Hub dataset: https://huggingface.co/datasets/Heriot-WattUniversity/dialog-babi/discussions"
] | 2022-07-09T11:45:46 | 2022-07-11T07:10:09 | 2022-07-11T07:10:01 | ## Describe the bug
Hi there,
I'm trying to add the following dataset to Huggingface datasets: https://huggingface.co/datasets/Heriot-WattUniversity/dialog-babi/blob/
I'm trying to do so using the CLI commands, but it seems that this command generates the wrong `dataset_info.json` file (you can find it in the repo a... | aleSuglia | https://github.com/huggingface/datasets/issues/4665 | null | false |
1,299,571,212 | 4,664 | Add stanford dog dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @khushmeeet, thanks for your contribution.\r\n\r\nBut wouldn't it be better to add this dataset to the Hub? \r\n- https://huggingface.co/docs/datasets/share\r\n- https://huggingface.co/docs/datasets/dataset_script",
"Hi @albertv... | 2022-07-09T04:46:07 | 2022-07-15T13:30:32 | 2022-07-15T13:15:42 | This PR is for adding dataset, related to issue #4504.
We are adding Stanford dog breed dataset. It is a multi class image classification dataset.
Details can be found here - http://vision.stanford.edu/aditya86/ImageNetDogs/
Tests on dummy data are currently failing, which I am looking into. | khushmeeet | https://github.com/huggingface/datasets/pull/4664 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4664",
"html_url": "https://github.com/huggingface/datasets/pull/4664",
"diff_url": "https://github.com/huggingface/datasets/pull/4664.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4664.patch",
"merged_at": null
} | true |
1,299,298,693 | 4,663 | Add text decorators | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-08T17:51:48 | 2022-07-18T18:33:14 | 2022-07-18T18:20:49 | This PR adds some decoration to text about different modalities to make it more obvious separate guides exist for audio, vision, and text. The goal is to make it easier for users to discover these guides!
. ",
"Hi! That's weird. It seems like the error points to the `mkstemp` function, but the official docs state the following:\r\n```\r\nThere are no race condi... | 2022-07-08T01:58:11 | 2025-04-10T13:21:23 | null | ## Describe the bug
I used to see this bug with an older version of the datasets. It seems to persist.
This is my concrete scenario: I launch several evaluation jobs on a cluster in which I share the file system and I share the cache directory used by huggingface libraries. The evaluation jobs read the same *.csv ... | ioana-blue | https://github.com/huggingface/datasets/issues/4661 | null | false |
1,297,128,387 | 4,660 | Fix _resolve_single_pattern_locally on Windows with multiple drives | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Good catch ! Sorry I forgot (again) about windows paths when writing this x)"
] | 2022-07-07T09:57:30 | 2022-07-07T17:03:36 | 2022-07-07T16:52:07 | Currently, when `_resolve_single_pattern_locally` is called from a different drive than the one in `pattern`, it raises an exception:
```
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\io\parquet.py:35: in __init_... | albertvillanova | https://github.com/huggingface/datasets/pull/4660 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4660",
"html_url": "https://github.com/huggingface/datasets/pull/4660",
"diff_url": "https://github.com/huggingface/datasets/pull/4660.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4660.patch",
"merged_at": "2022-07-07T16:52... | true |
1,297,094,140 | 4,659 | Transfer CI to GitHub Actions | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot @albertvillanova ! I hope we're finally done with flakiness on windows ^^\r\n\r\nAlso thanks for paying extra attention to billing and avoiding running unnecessary jobs. Though for certain aspects (see my comments), I th... | 2022-07-07T09:29:47 | 2022-07-12T11:30:20 | 2022-07-12T11:18:25 | This PR transfers CI from CircleCI to GitHub Actions. The implementation in GitHub Actions tries to be as faithful as possible to the implementation in CircleCI and get the same output results (exceptions below).
**IMPORTANT NOTE**: The fast-fail policy (described below) is not finally implemented, so that:
- we c... | albertvillanova | https://github.com/huggingface/datasets/pull/4659 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4659",
"html_url": "https://github.com/huggingface/datasets/pull/4659",
"diff_url": "https://github.com/huggingface/datasets/pull/4659.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4659.patch",
"merged_at": "2022-07-12T11:18... | true |
1,297,001,390 | 4,658 | Transfer CI tests to GitHub Actions | closed | [] | 2022-07-07T08:10:50 | 2022-07-12T11:18:25 | 2022-07-12T11:18:25 | Let's try CI tests using GitHub Actions to see if they are more stable than on CircleCI. | albertvillanova | https://github.com/huggingface/datasets/issues/4658 | null | false |
1,296,743,133 | 4,657 | Add SQuAD2.0 Dataset | closed | [
"Hey, It's already present [here](https://huggingface.co/datasets/squad_v2) ",
"Hi! This dataset is indeed already available on the Hub. Closing."
] | 2022-07-07T03:19:36 | 2022-07-12T16:14:52 | 2022-07-12T16:14:52 | ## Adding a Dataset
- **Name:** *SQuAD2.0*
- **Description:** *Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading ... | omarespejel | https://github.com/huggingface/datasets/issues/4657 | null | false |
1,296,740,266 | 4,656 | Add Amazon-QA Dataset | closed | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/Amazon-QA)."
] | 2022-07-07T03:15:11 | 2022-07-14T02:20:12 | 2022-07-14T02:20:12 | ## Adding a Dataset
- **Name:** *Amazon-QA*
- **Description:** *The dataset is .jsonl format, where each line in the file is a json string that corresponds to a question, existing answers to the question and the extracted review snippets (relevant to the question).*
- **Paper:** *https://github.com/amazonqa/amazonqa... | omarespejel | https://github.com/huggingface/datasets/issues/4656 | null | false |
1,296,720,896 | 4,655 | Simple Wikipedia | closed | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/simple-wiki)."
] | 2022-07-07T02:51:26 | 2022-07-14T02:16:33 | 2022-07-14T02:16:33 | ## Adding a Dataset
- **Name:** *Simple Wikipedia*
- **Description:** *Two different versions of the data set now exist. Both were generated by aligning Simple English Wikipedia and English Wikipedia. A complete description of the extraction process can be found in "Simple English Wikipedia: A New Simplification Task... | omarespejel | https://github.com/huggingface/datasets/issues/4655 | null | false |
1,296,716,119 | 4,654 | Add Quora Question Triplets Dataset | closed | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/QQP_triplets)."
] | 2022-07-07T02:43:42 | 2022-07-14T02:13:50 | 2022-07-14T02:13:50 | ## Adding a Dataset
- **Name:** *Quora Question Triplets*
- **Description:** *This dataset consists of over 400,000 lines of potential question duplicate pairs. Each line contains IDs for each question in the pair, the full text for each question, and a binary value that indicates whether the line truly contains a du... | omarespejel | https://github.com/huggingface/datasets/issues/4654 | null | false |
1,296,702,834 | 4,653 | Add Altlex dataset | closed | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/altlex)."
] | 2022-07-07T02:23:02 | 2022-07-14T02:12:39 | 2022-07-14T02:12:39 | ## Adding a Dataset
- **Name:** *Altlex*
- **Description:** *Git repository for software associated with the 2016 ACL paper "Identifying Causal Relations Using Parallel Wikipedia Articles.”*
- **Paper:** *https://aclanthology.org/P16-1135.pdf*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embed... | omarespejel | https://github.com/huggingface/datasets/issues/4653 | null | false |
1,296,697,498 | 4,652 | Add Sentence Compression Dataset | closed | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/sentence-compression)."
] | 2022-07-07T02:13:46 | 2022-07-14T02:11:48 | 2022-07-14T02:11:48 | ## Adding a Dataset
- **Name:** *Sentence Compression*
- **Description:** *Large corpus of uncompressed and compressed sentences from news articles.*
- **Paper:** *https://www.aclweb.org/anthology/D13-1155/*
- **Data:** *https://github.com/google-research-datasets/sentence-compression/tree/master/data*
- **Motivat... | omarespejel | https://github.com/huggingface/datasets/issues/4652 | null | false |
1,296,689,414 | 4,651 | Add Flickr 30k Dataset | closed | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/flickr30k-captions)."
] | 2022-07-07T01:59:08 | 2022-07-14T02:09:45 | 2022-07-14T02:09:45 | ## Adding a Dataset
- **Name:** *Flickr 30k*
- **Description:** *To produce the denotation graph, we have created an image caption corpus consisting of 158,915 crowd-sourced captions describing 31,783 images. This is an extension of our previous Flickr 8k Dataset. The new images and captions focus on people involved ... | omarespejel | https://github.com/huggingface/datasets/issues/4651 | null | false |
1,296,680,037 | 4,650 | Add SPECTER dataset | open | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/SPECTER)"
] | 2022-07-07T01:41:32 | 2022-07-14T02:07:49 | null | ## Adding a Dataset
- **Name:** *SPECTER*
- **Description:** *SPECTER: Document-level Representation Learning using Citation-informed Transformers*
- **Paper:** *https://doi.org/10.18653/v1/2020.acl-main.207*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/spe... | omarespejel | https://github.com/huggingface/datasets/issues/4650 | null | false |
1,296,673,712 | 4,649 | Add PAQ dataset | closed | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/PAQ_pairs)"
] | 2022-07-07T01:29:42 | 2022-07-14T02:06:27 | 2022-07-14T02:06:27 | ## Adding a Dataset
- **Name:** *PAQ*
- **Description:** *This repository contains code and models to support the research paper PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them*
- **Paper:** *https://arxiv.org/abs/2102.07033*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/... | omarespejel | https://github.com/huggingface/datasets/issues/4649 | null | false |
1,296,659,335 | 4,648 | Add WikiAnswers dataset | closed | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/WikiAnswers)"
] | 2022-07-07T01:06:37 | 2022-07-14T02:03:40 | 2022-07-14T02:03:40 | ## Adding a Dataset
- **Name:** *WikiAnswers*
- **Description:** *The WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases. Each cluster optionally contains an answer provided by WikiAnswers users.*
- **Paper:** *https://dl.acm.org/doi/10.1145/2623330.2623677*
- **Data:** *ht... | omarespejel | https://github.com/huggingface/datasets/issues/4648 | null | false |
1,296,311,270 | 4,647 | Add Reddit dataset | open | [] | 2022-07-06T19:49:18 | 2022-07-06T19:49:18 | null | ## Adding a Dataset
- **Name:** *Reddit comments (2015-2018)*
- **Description:** *Reddit is an American social news aggregation website, where users can post links, and take part in discussions on these posts. These threaded discussions provide a large corpus, which is converted into a conversational dataset using th... | omarespejel | https://github.com/huggingface/datasets/issues/4647 | null | false |
1,296,027,785 | 4,645 | Set HF_SCRIPTS_VERSION to main | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-06T15:43:21 | 2022-07-06T15:56:21 | 2022-07-06T15:45:05 | After renaming "master" to "main", the CI fails with
```
AssertionError: 'https://raw.githubusercontent.com/huggingface/datasets/main/datasets/_dummy/_dummy.py' not found in "Couldn't find a dataset script at /home/circleci/datasets/_dummy/_dummy.py or any data file in the same directory. Couldn't find '_dummy' on th... | lhoestq | https://github.com/huggingface/datasets/pull/4645 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4645",
"html_url": "https://github.com/huggingface/datasets/pull/4645",
"diff_url": "https://github.com/huggingface/datasets/pull/4645.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4645.patch",
"merged_at": "2022-07-06T15:45... | true |
1,296,018,052 | 4,644 | [Minor fix] Typo correction | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-06T15:37:02 | 2022-07-06T15:56:32 | 2022-07-06T15:45:16 | recieve -> receive | cakiki | https://github.com/huggingface/datasets/pull/4644 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4644",
"html_url": "https://github.com/huggingface/datasets/pull/4644",
"diff_url": "https://github.com/huggingface/datasets/pull/4644.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4644.patch",
"merged_at": "2022-07-06T15:45... | true |
1,295,852,650 | 4,643 | Rename master to main | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"All the mentions I found on google were simple URLs that will be redirected, so it's fine. I also checked the spaces and we should be good:\r\n- dalle-mini used to install the master branch but [it's no longer the case](https://huggi... | 2022-07-06T13:34:30 | 2022-07-06T15:36:46 | 2022-07-06T15:25:08 | This PR renames mentions of "master" by "main" in the code base for several cases:
- set the default dataset script version to "main" if the local installation of `datasets` is a dev installation
- update URLs to this github repository to use "main"
- update the DVC benchmark
- update the github workflows
- update... | lhoestq | https://github.com/huggingface/datasets/pull/4643 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4643",
"html_url": "https://github.com/huggingface/datasets/pull/4643",
"diff_url": "https://github.com/huggingface/datasets/pull/4643.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4643.patch",
"merged_at": "2022-07-06T15:25... | true |
1,295,748,083 | 4,642 | Streaming issue for ccdv/pubmed-summarization | closed | [
"Thanks for reporting @lewtun.\r\n\r\nI confirm there is an issue with streaming: it does not stream locally. ",
"Oh, after investigation, the source of the issue is in the Hub dataset loading script.\r\n\r\nI'm opening a PR on the Hub dataset.",
"I've opened a PR on their Hub dataset to support streaming: http... | 2022-07-06T12:13:07 | 2022-07-06T14:17:34 | 2022-07-06T14:17:34 | ### Link
https://huggingface.co/datasets/ccdv/pubmed-summarization
### Description
This was reported by a [user of AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/7). It seems like streaming doesn't work due to the way the dataset loading script is defined?
```
Status c... | lewtun | https://github.com/huggingface/datasets/issues/4642 | null | false |
1,295,633,250 | 4,641 | Dataset Viewer issue for kmfoda/booksum | closed | [
"Thanks for reporting, @lewtun.\r\n\r\nIt works locally in streaming mode:\r\n```\r\n{'bid': 27681,\r\n 'is_aggregate': True,\r\n 'source': 'cliffnotes',\r\n 'chapter_path': 'all_chapterized_books/27681-chapters/chapters_1_to_2.txt',\r\n 'summary_path': 'finished_summaries/cliffnotes/The Last of the Mohicans/sectio... | 2022-07-06T10:38:16 | 2022-07-06T13:25:28 | 2022-07-06T11:58:06 | ### Link
https://huggingface.co/datasets/kmfoda/booksum
### Description
A [user of AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/9) discovered this dataset cannot be streamed due to:
```
Status code: 400
Exception: ClientResponseError
Message: 401, messa... | lewtun | https://github.com/huggingface/datasets/issues/4641 | null | false |
1,295,495,699 | 4,640 | Support all split in streaming mode | open | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4640). All of your documentation changes will be reflected on that endpoint."
] | 2022-07-06T08:56:38 | 2022-07-06T15:19:55 | null | Fix #4637. | albertvillanova | https://github.com/huggingface/datasets/pull/4640 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4640",
"html_url": "https://github.com/huggingface/datasets/pull/4640",
"diff_url": "https://github.com/huggingface/datasets/pull/4640.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4640.patch",
"merged_at": null
} | true |
1,295,367,322 | 4,639 | Add HaGRID -- HAnd Gesture Recognition Image Dataset | open | [] | 2022-07-06T07:41:32 | 2022-07-06T07:41:32 | null | ## Adding a Dataset
- **Name:** HaGRID -- HAnd Gesture Recognition Image Dataset
- **Description:** We introduce a large image dataset HaGRID (HAnd Gesture Recognition Image Dataset) for hand gesture recognition (HGR) systems. You can use it for image classification or image detection tasks. Proposed dataset allows t... | osanseviero | https://github.com/huggingface/datasets/issues/4639 | null | false |
1,295,233,315 | 4,638 | The speechocean762 dataset | closed | [
"CircleCI reported two errors, but I didn't find the reason. The error message:\r\n```\r\n_________________ ERROR collecting tests/test_dataset_cards.py _________________\r\ntests/test_dataset_cards.py:53: in <module>\r\n @pytest.mark.parametrize(\"dataset_name\", get_changed_datasets(repo_path))\r\ntests/test_d... | 2022-07-06T06:17:30 | 2022-10-03T09:34:36 | 2022-10-03T09:34:36 | [speechocean762](https://www.openslr.org/101/) is a non-native English corpus for pronunciation scoring tasks. It is free for both commercial and non-commercial use.
I believe it would be easier to use if it were available on Hugging Face. | jimbozhang | https://github.com/huggingface/datasets/pull/4638 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4638",
"html_url": "https://github.com/huggingface/datasets/pull/4638",
"diff_url": "https://github.com/huggingface/datasets/pull/4638.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4638.patch",
"merged_at": null
} | true |
1,294,818,236 | 4,637 | The "all" split breaks streaming | open | [
"Thanks for reporting @cakiki.\r\n\r\nYes, this is a bug. We are investigating it.",
"@albertvillanova Nice! Let me know if it's something I can fix myself; would love to contribute!",
"@cakiki I was working on this but if you would like to contribute, go ahead. I will close my PR. ;)\r\n\r\nFor the moment I j... | 2022-07-05T21:56:49 | 2022-07-15T13:59:30 | null | ## Describe the bug
Not sure if this is a bug or just the way streaming works, but setting `streaming=True` did not work when setting `split="all"`
## Steps to reproduce the bug
The following works:
```python
ds = load_dataset('super_glue', 'wsc.fixed', split='all')
```
The following throws `ValueError: Bad ... | cakiki | https://github.com/huggingface/datasets/issues/4637 | null | false |
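Until `split="all"` works in streaming mode, the behavior it should produce — every split concatenated, still lazy — can be sketched with the standard library (the generators below stand in for streamed splits; this is not the actual `datasets` API):

```python
from itertools import chain

# Stand-ins for streamed splits; in `datasets`, each would be an
# IterableDataset from load_dataset(..., split=name, streaming=True).
def stream_split(name, n):
    for i in range(n):
        yield {"split": name, "idx": i}

splits = {"train": 3, "validation": 2, "test": 2}

# "all" is conceptually the lazy concatenation of every split.
all_examples = chain.from_iterable(stream_split(s, n) for s, n in splits.items())

rows = list(all_examples)
print(len(rows))  # 7
```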
1,294,547,836 | 4,636 | Add info in docs about behavior of download_config.num_proc | closed | [] | 2022-07-05T17:01:00 | 2022-07-28T10:40:32 | 2022-07-28T10:40:32 | **Is your feature request related to a problem? Please describe.**
I went to override `download_config.num_proc` and was confused about what was happening under the hood. It would be nice to have the behavior documented a bit better so folks know what's happening when they use it.
**Describe the solution you'd li... | nateraw | https://github.com/huggingface/datasets/issues/4636 | null | false |
1,294,475,931 | 4,635 | Dataset Viewer issue for vadis/sv-ident | closed | [
"Thanks for reporting, @e-tornike \r\n\r\nSome context:\r\n- #4527 \r\n\r\nThe dataset loads locally in streaming mode:\r\n```python\r\nIn [2]: from datasets import load_dataset; ds = load_dataset(\"vadis/sv-ident\", split=\"validation\", streaming=True); item = next(iter(ds)); item\r\nUsing custom data configurati... | 2022-07-05T15:48:13 | 2022-07-06T07:13:33 | 2022-07-06T07:12:14 | ### Link
https://huggingface.co/datasets/vadis/sv-ident/viewer/default/validation
### Description
Error message when loading validation split in the viewer:
```
Status code: 400
Exception: Status400Error
Message: The split cache is empty.
```
### Owner
_No response_ | e-tornike | https://github.com/huggingface/datasets/issues/4635 | null | false |
1,294,405,251 | 4,634 | Can't load the Hausa audio dataset | closed | [
"Could you provide the error details? It is difficult to debug otherwise. Also try another config: `ha` is not a valid one."
] | 2022-07-05T14:47:36 | 2022-09-13T14:07:32 | 2022-09-13T14:07:32 | common_voice_train = load_dataset("common_voice", "ha", split="train+validation") | moro23 | https://github.com/huggingface/datasets/issues/4634 | null | false |
1,294,367,783 | 4,633 | [data_files] Only match separated split names | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I ran a script to find affected datasets (just did it on non-private non-gated). Adding \"testing\" and \"evaluation\" fixes all of them except one:\r\n- projecte-aina/cat_manynames:\thuman_annotated_testset.tsv\r\n\r\nLet me open... | 2022-07-05T14:18:11 | 2022-07-18T13:20:29 | 2022-07-18T13:07:33 | As reported in https://github.com/huggingface/datasets/issues/4477, the current pattern matching to infer which file goes into which split is too permissive. For example a file "contest.py" would be considered part of a test split (it contains "test") and "seqeval.py" as well (it contains "eval").
In this PR I made ... | lhoestq | https://github.com/huggingface/datasets/pull/4633 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4633",
"html_url": "https://github.com/huggingface/datasets/pull/4633",
"diff_url": "https://github.com/huggingface/datasets/pull/4633.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4633.patch",
"merged_at": "2022-07-18T13:07... | true |
1,294,166,880 | 4,632 | 'sort' method sorts one column only | closed | [
"Hi ! `ds.sort()` does sort the full dataset, not just one column:\r\n```python\r\nfrom datasets import *\r\n\r\nds = Dataset.from_dict({\"foo\": [3, 2, 1], \"bar\": [\"c\", \"b\", \"a\"]})\r\nprint(ds.sort(\"foo\").to_pandas())\r\n# foo bar\r\n# 0 1 a\r\n# 1 2 b\r\n# 2 3 c\r\n```\r\n\r\nWhat made y... | 2022-07-05T11:25:26 | 2023-07-25T15:04:27 | 2023-07-25T15:04:27 | The 'sort' method changes the order of one column only (the one defined by the argument 'column'), thus creating a mismatch between a sample's fields. I would expect it to change the order of the samples as a whole, based on the 'column' order. | shachardon | https://github.com/huggingface/datasets/issues/4632 | null | false |
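As the maintainer's reply above shows, a correct `sort` reorders whole rows, not one column in isolation. A minimal plain-Python sketch of what a full-table sort over columnar storage has to do (illustrative only, not the `datasets` internals):

```python
# Columnar table, one list per column, as columnar formats store data.
table = {"foo": [3, 2, 1], "bar": ["c", "b", "a"]}

def sort_table(table, column):
    # Compute the row permutation from the sort column...
    order = sorted(range(len(table[column])), key=lambda i: table[column][i])
    # ...and apply the SAME permutation to every column, keeping rows aligned.
    return {name: [values[i] for i in order] for name, values in table.items()}

sorted_table = sort_table(table, "foo")
print(sorted_table)  # {'foo': [1, 2, 3], 'bar': ['a', 'b', 'c']}
```

Sorting one column on its own would instead break the row alignment the issue describes.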
1,293,545,900 | 4,631 | Update WinoBias README | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-04T20:24:40 | 2022-07-07T13:23:32 | 2022-07-07T13:11:47 | I'm adding some information about Winobias that I got from the paper :smile:
I think this makes it a bit clearer! | sashavor | https://github.com/huggingface/datasets/pull/4631 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4631",
"html_url": "https://github.com/huggingface/datasets/pull/4631",
"diff_url": "https://github.com/huggingface/datasets/pull/4631.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4631.patch",
"merged_at": "2022-07-07T13:11... | true |
1,293,470,728 | 4,630 | fix(dataset_wrappers): Fixes access to fsspec.asyn in torch_iterable_dataset.py. | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-04T18:26:55 | 2022-07-05T15:19:52 | 2022-07-05T15:08:21 | Fix #4612.
Apparently, the newest `fsspec` versions do not allow access to attribute-based modules if they are not imported, such as `fsspec.asyn`.
Thus, @mariosasko suggested to add the missing part to the module import to allow for its access. | gugarosa | https://github.com/huggingface/datasets/pull/4630 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4630",
"html_url": "https://github.com/huggingface/datasets/pull/4630",
"diff_url": "https://github.com/huggingface/datasets/pull/4630.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4630.patch",
"merged_at": "2022-07-05T15:08... | true |
1,293,418,800 | 4,629 | Rename repo default branch to main | closed | [] | 2022-07-04T17:16:10 | 2022-07-06T15:49:57 | 2022-07-06T15:49:57 | Rename repository default branch to `main` (instead of current `master`).
Once renamed, users will have to manually update their local repos:
- [ ] Upstream:
```
git branch -m master main
git fetch upstream main
git branch -u upstream/main main
git remote set-head upstream -a
```
- [ ] Origin... | albertvillanova | https://github.com/huggingface/datasets/issues/4629 | null | false |
1,293,361,308 | 4,628 | Fix time type `_arrow_to_datasets_dtype` conversion | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-04T16:20:15 | 2022-07-07T14:08:38 | 2022-07-07T13:57:12 | Fix #4620
The issue stems from the fact that `pa.array([time_data]).type` returns `DataType(time64[unit])`, which doesn't expose the `unit` attribute, instead of `Time64Type(time64[unit])`. I believe this is a bug in PyArrow. Luckily, the both types have the same `str()`, so in this PR I call `pa.type_for_alias(str(... | mariosasko | https://github.com/huggingface/datasets/pull/4628 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4628",
"html_url": "https://github.com/huggingface/datasets/pull/4628",
"diff_url": "https://github.com/huggingface/datasets/pull/4628.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4628.patch",
"merged_at": "2022-07-07T13:57... | true |
1,293,287,798 | 4,627 | fixed duplicate calculation of spearmanr function in metrics wrapper. | closed | [
"Great, can open a PR in `evaluate` as well to optimize this.\r\n\r\nRelatedly, I wanted to add a new metric, Kendall Tau (https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.kendalltau.html). If I were to open a PR with the wrapper, description, citation, docstrings, readme, etc. would it make more se... | 2022-07-04T15:02:01 | 2022-07-07T12:41:09 | 2022-07-07T12:41:09 | During _compute, the scipy.stats spearmanr function was called twice, redundantly, once for calculating the score and once for calculating the p-value, under the conditional branch where return_pvalue=True. I adjusted the _compute function to execute the spearmanr function once, store the results tuple in a temporary v... | benlipkin | https://github.com/huggingface/datasets/pull/4627 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4627",
"html_url": "https://github.com/huggingface/datasets/pull/4627",
"diff_url": "https://github.com/huggingface/datasets/pull/4627.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4627.patch",
"merged_at": "2022-07-07T12:41... | true |
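The point of the fix is to call the correlation function once and reuse its result instead of recomputing it in the p-value branch. A self-contained sketch of the idea, with a pure-Python rank-based Spearman standing in for `scipy.stats.spearmanr` (which the real wrapper uses; the p-value computation is omitted here):

```python
def _ranks(values):
    # 1-based average ranks; ties share the mean of their positions.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearmanr(x, y):
    # Spearman's rho is the Pearson correlation of the ranks.
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Computed ONCE; in the fixed wrapper both the score and the p-value are
# read from the single result tuple instead of calling the function twice.
rho = spearmanr([1, 2, 3, 4], [10, 9, 2.5, 6])  # ≈ -0.8
```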
1,293,256,269 | 4,626 | Add non-commercial licensing info for datasets for which we removed tags | open | [
"yep plus `license_details` also makes sense for this IMO"
] | 2022-07-04T14:32:43 | 2022-07-08T14:27:29 | null | We removed several YAML tags saying that certain datasets can't be used for commercial purposes: https://github.com/huggingface/datasets/pull/4613#discussion_r911919753
Reason for this is that we only allow tags that are part of our [supported list of licenses](https://github.com/huggingface/datasets/blob/84fc3ad73c... | lhoestq | https://github.com/huggingface/datasets/issues/4626 | null | false |
1,293,163,744 | 4,625 | Unpack `dl_manager.iter_files` to allow parallization | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Cool thanks ! Yup it sounds like the right solution.\r\n\r\nIt looks like `_generate_tables` needs to be updated as well to fix the CI"
] | 2022-07-04T13:16:58 | 2022-07-05T11:11:54 | 2022-07-05T11:00:48 | Iterate over data files outside `dl_manager.iter_files` to allow parallelization in streaming mode.
(The issue reported [here](https://discuss.huggingface.co/t/dataset-only-have-n-shard-1-when-has-multiple-shards-in-repo/19887))
PS: Another option would be to override `FilesIterable.__getitem__` to make it indexa... | mariosasko | https://github.com/huggingface/datasets/pull/4625 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4625",
"html_url": "https://github.com/huggingface/datasets/pull/4625",
"diff_url": "https://github.com/huggingface/datasets/pull/4625.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4625.patch",
"merged_at": "2022-07-05T11:00... | true |
1,293,085,058 | 4,624 | Remove all paperswithcode_id: null | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> We've been using `null` to specify that we checked on pwc but the dataset doesn't exist there.\r\n\r\n@lhoestq maybe it's better to accept it on the Hub side then? Let me know if you want us to do it Hub-side",
"Yup it's maybe be... | 2022-07-04T12:11:32 | 2023-09-24T10:05:19 | 2022-07-04T13:10:38 | On the Hub there is a validation error on the `paperswithcode_id` tag when the value is `null`:
<img width="686" alt="image" src="https://user-images.githubusercontent.com/42851186/177151825-93d341c5-25bd-41ab-96c2-c0b516d51c68.png">
We've been using `null` to specify that we checked on pwc but the dataset doesn'... | lhoestq | https://github.com/huggingface/datasets/pull/4624 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4624",
"html_url": "https://github.com/huggingface/datasets/pull/4624",
"diff_url": "https://github.com/huggingface/datasets/pull/4624.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4624.patch",
"merged_at": null
} | true |
1,293,042,894 | 4,623 | Loading MNIST as Pytorch Dataset | open | [
"Hi ! We haven't implemented the conversion from image data to PyTorch tensors yet, I think\r\n\r\ncc @mariosasko ",
"So I understand:\r\n\r\nset_format() does not properly do the conversion to pytorch tensors from PIL images.\r\n\r\nSo that someone who stumbles on this can use the package:\r\n\r\n```python\r\nda... | 2022-07-04T11:33:10 | 2022-07-04T14:40:50 | null | ## Describe the bug
Conversion of MNIST dataset to pytorch fails with bug
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("mnist", split="train")
dataset.set_format('torch')
dataset[0]
print()
```
## Expected results
Expect to see torch tensors image and l... | jameschapman19 | https://github.com/huggingface/datasets/issues/4623 | null | false |
1,293,031,939 | 4,622 | Fix ImageFolder with parameters drop_metadata=True and drop_labels=False (when metadata.jsonl is present) | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq @mariosasko pls take a look at https://github.com/huggingface/datasets/pull/4622/commits/769e4c046a5bd5e3a4dbd09cfad1f4cf60677869. I modified `_generate_examples()` according to the same logic too: removed checking if `metad... | 2022-07-04T11:23:20 | 2022-07-15T14:37:23 | 2022-07-15T14:24:24 | Will fix #4621
ImageFolder raises `KeyError: 'label'` with params `drop_metadata=True` and `drop_labels=False` (if there is at least one metadata.jsonl file in a data directory). This happens because metadata files are collected inside the `analyze()` function regardless of the `drop_metadata` value. And then the following co... | polinaeterna | https://github.com/huggingface/datasets/pull/4622 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4622",
"html_url": "https://github.com/huggingface/datasets/pull/4622",
"diff_url": "https://github.com/huggingface/datasets/pull/4622.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4622.patch",
"merged_at": "2022-07-15T14:24... | true |
1,293,030,128 | 4,621 | ImageFolder raises an error with parameters drop_metadata=True and drop_labels=False when metadata.jsonl is present | closed | [] | 2022-07-04T11:21:44 | 2022-07-15T14:24:24 | 2022-07-15T14:24:24 | ## Describe the bug
If you pass `drop_metadata=True` and `drop_labels=False` when a `data_dir` contains at least one `metadata.jsonl` file, you will get a KeyError. This is probably not a very useful case but we shouldn't get an error anyway. Asking users to move metadata files manually outside `data_dir` or pass fe... | polinaeterna | https://github.com/huggingface/datasets/issues/4621 | null | false |
1,292,797,878 | 4,620 | Data type is not recognized when using datetime.time | closed | [
"cc @mariosasko ",
"Hi, thanks for reporting! I'm investigating the issue."
] | 2022-07-04T08:13:38 | 2022-07-07T13:57:11 | 2022-07-07T13:57:11 | ## Describe the bug
Creating a dataset from a pandas dataframe with `datetime.time` format generates an error.
## Steps to reproduce the bug
```python
import pandas as pd
from datetime import time
from datasets import Dataset
df = pd.DataFrame({"feature_name": [time(1, 1, 1)]})
dataset = Dataset.from_pandas... | severo | https://github.com/huggingface/datasets/issues/4620 | null | false |
1,292,107,275 | 4,619 | np arrays get turned into native lists | open | [
"If you add the line `dataset2.set_format('np')` before calling `dataset2[0]['tmp']` it should return `np.ndarray`.\r\nI believe internally it will not store it as a list, it is only returning a list when you index it.\r\n\r\n```\r\nIn [1]: import datasets, numpy as np\r\nIn [2]: dataset = datasets.load_dataset(\"g... | 2022-07-02T17:54:57 | 2022-07-03T20:27:07 | null | ## Describe the bug
When attaching an `np.array` field, it seems that it automatically gets turned into a list (see below). Why is this happening? Could it lose precision? Is there a way to make sure this doesn't happen?
## Steps to reproduce the bug
```python
>>> import datasets, numpy as np
>>> dataset = datas... | ZhaofengWu | https://github.com/huggingface/datasets/issues/4619 | null | false |
1,292,078,225 | 4,618 | contribute data loading for object detection datasets with yolo data format | open | [
"Hi! The `imagefolder` script is already quite complex, so a standalone script sounds better. Also, I suggest we create an org on the Hub (e.g. `hf-loaders`) and store such scripts there for easier maintenance rather than having them as packaged modules (IMO only very generic loaders should be packaged). WDYT @lhoe... | 2022-07-02T15:21:59 | 2022-07-21T14:10:44 | null | **Is your feature request related to a problem? Please describe.**
At the moment, HF datasets loads [image classification datasets](https://huggingface.co/docs/datasets/image_process) out-of-the-box. There could be a data loader for loading standard object detection datasets ([original discussion here](https://hugging... | faizankshaikh | https://github.com/huggingface/datasets/issues/4618 | null | false |
1,291,307,428 | 4,615 | Fix `embed_storage` on features inside lists/sequences | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-01T11:52:08 | 2022-07-08T12:13:10 | 2022-07-08T12:01:36 | Add a dedicated function for embed_storage to always preserve the embedded/casted arrays (and to have more control over `embed_storage` in general).
Fix #4591
~~(Waiting for #4608 to be merged to mark this PR as ready for review - required for fixing `xgetsize` in private repos)~~ Done! | mariosasko | https://github.com/huggingface/datasets/pull/4615 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4615",
"html_url": "https://github.com/huggingface/datasets/pull/4615",
"diff_url": "https://github.com/huggingface/datasets/pull/4615.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4615.patch",
"merged_at": "2022-07-08T12:01... | true |
1,291,218,020 | 4,614 | Ensure ConcatenationTable.cast uses target_schema metadata | closed | [
"Hi @lhoestq, Thanks for the detailed comment. I've tested the suggested approach and can confirm it works for the testcase outlined above! The PR is updated with the changes.",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-01T10:22:08 | 2022-07-19T13:48:45 | 2022-07-19T13:36:24 | Currently, `ConcatenationTable.cast` does not use target_schema metadata when casting subtables. This causes an issue when using cast_column and the underlying table is a ConcatenationTable.
Code example of where the issue arises:
```
from datasets import Dataset, Image
column1 = [0, 1]
image_paths = ['/images/im... | dtuit | https://github.com/huggingface/datasets/pull/4614 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4614",
"html_url": "https://github.com/huggingface/datasets/pull/4614",
"diff_url": "https://github.com/huggingface/datasets/pull/4614.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4614.patch",
"merged_at": "2022-07-19T13:36... | true |
1,291,181,193 | 4,613 | Align/fix license metadata info | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you thank you! Let's merge and pray? 😱 ",
"I just need to add `license_details` to the validator and yup we can merge"
] | 2022-07-01T09:50:50 | 2022-07-01T12:53:57 | 2022-07-01T12:42:47 | fix bad "other-*" licenses and add the corresponding "license_details" when relevant | julien-c | https://github.com/huggingface/datasets/pull/4613 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4613",
"html_url": "https://github.com/huggingface/datasets/pull/4613",
"diff_url": "https://github.com/huggingface/datasets/pull/4613.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4613.patch",
"merged_at": "2022-07-01T12:42... | true |
1,290,984,660 | 4,612 | Release 2.3.0 broke custom iterable datasets | closed | [
"Apparently, `fsspec` does not allow access to attribute-based modules anymore, such as `fsspec.asyn`.\r\n\r\nHowever, this is a fairly simple fix:\r\n- Change the import to: `from fsspec import asyn`;\r\n- Change line 18 to: `asyn.iothread[0] = None`;\r\n- Change line 19 to `asyn.loop[0] = None`.",
"Hi! I think... | 2022-07-01T06:46:07 | 2022-07-05T15:08:21 | 2022-07-05T15:08:21 | ## Describe the bug
Trying to iterate examples from a custom iterable dataset fails due to a bug introduced in `torch_iterable_dataset.py` since the release of 2.3.0.
## Steps to reproduce the bug
```python
next(iter(custom_iterable_dataset))
```
## Expected results
`next(iter(custom_iterable_dataset))` should retu... | aapot | https://github.com/huggingface/datasets/issues/4612 | null | false |
1,290,940,874 | 4,611 | Preserve member order by MockDownloadManager.iter_archive | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-01T05:48:20 | 2022-07-01T16:59:11 | 2022-07-01T16:48:28 | Currently, `MockDownloadManager.iter_archive` yields paths to archive members in an order given by `path.rglob("*")`, which migh not be the same order as in the original archive.
See issue in:
- https://github.com/huggingface/datasets/pull/4579#issuecomment-1172135027
This PR fixes the order of the members yield... | albertvillanova | https://github.com/huggingface/datasets/pull/4611 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4611",
"html_url": "https://github.com/huggingface/datasets/pull/4611",
"diff_url": "https://github.com/huggingface/datasets/pull/4611.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4611.patch",
"merged_at": "2022-07-01T16:48... | true |
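The ordering pitfall the PR fixes can be shown with the standard library: `Path.rglob("*")` yields entries in filesystem order, which need not match the archive's member order, so deterministic iteration has to pin an explicit order (here simply sorted paths; the real fix preserves the order of the original archive):

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    # Simulate extracted archive members.
    for name in ["b.txt", "a.txt", "sub/c.txt"]:
        path = root / name
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(name)

    # rglob order is filesystem-dependent; sorting makes iteration deterministic.
    members = sorted(p for p in root.rglob("*") if p.is_file())
    names = [p.relative_to(root).as_posix() for p in members]
    print(names)  # ['a.txt', 'b.txt', 'sub/c.txt']
```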
1,290,603,827 | 4,610 | codeparrot/github-code failing to load | closed | [
"I believe the issue is in `codeparrot/github-code`. `base_path` param is missing - https://huggingface.co/datasets/codeparrot/github-code/blob/main/github-code.py#L169\r\n\r\nFunction definition has changed.\r\nhttps://github.com/huggingface/datasets/blob/0e1c629cfb9f9ba124537ba294a0ec451584da5f/src/datasets/data_... | 2022-06-30T20:24:48 | 2022-07-05T14:24:13 | 2022-07-05T09:19:56 | ## Describe the bug
codeparrot/github-code fails to load with a `TypeError: get_patterns_in_dataset_repository() missing 1 required positional argument: 'base_path'`
## Steps to reproduce the bug
```python
from datasets import load_dataset
```
## Expected results
loaded dataset object
## Actual results
`... | PyDataBlog | https://github.com/huggingface/datasets/issues/4610 | null | false |
1,290,392,083 | 4,609 | librispeech dataset has to download whole subset when specifing the split to use | closed | [
"Hi! You can use streaming to fetch only a subset of the data:\r\n```python\r\nraw_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"train.100\", streaming=True)\r\n```\r\nAlso, we plan to make it possible to download a particular split in the non-streaming mode, but this task is not easy due to how ou... | 2022-06-30T16:38:24 | 2022-07-12T21:44:32 | 2022-07-12T21:44:32 | ## Describe the bug
librispeech dataset has to download whole subset when specifing the split to use
## Steps to reproduce the bug
see below
# Sample code to reproduce the bug
```
!pip install datasets
from datasets import load_dataset
raw_dataset = load_dataset("librispeech_asr", "clean", split="train.100")
... | sunhaozhepy | https://github.com/huggingface/datasets/issues/4609 | null | false |
1,290,298,002 | 4,608 | Fix xisfile, xgetsize, xisdir, xlistdir in private repo | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Added tests for xisfile, xgetsize, xlistdir and xglob for private repos, and also tests for xwalk that was untested"
] | 2022-06-30T15:23:21 | 2022-07-06T12:45:59 | 2022-07-06T12:34:19 | `xisfile` is working in a private repository when passing a chained URL to a file inside an archive, e.g. `zip://a.txt::https://huggingface/datasets/username/dataset_name/resolve/main/data.zip`. However it's not working when passing a simple file `https://huggingface/datasets/username/dataset_name/resolve/main/data.zip... | lhoestq | https://github.com/huggingface/datasets/pull/4608 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4608",
"html_url": "https://github.com/huggingface/datasets/pull/4608",
"diff_url": "https://github.com/huggingface/datasets/pull/4608.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4608.patch",
"merged_at": "2022-07-06T12:34... | true |
1,290,171,941 | 4,607 | Align more metadata with other repo types (models,spaces) | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I just set a default value (None) for the deprecated licenses and languages fields, which should fix most of the CI failures.\r\n\r\nNote that the CI should still be red because you edited many dataset cards and they're still missing... | 2022-06-30T13:52:12 | 2022-07-01T12:00:37 | 2022-07-01T11:49:14 | see also associated PR on the `datasets-tagging` Space: https://huggingface.co/spaces/huggingface/datasets-tagging/discussions/2 (to merge after this one is merged) | julien-c | https://github.com/huggingface/datasets/pull/4607 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4607",
"html_url": "https://github.com/huggingface/datasets/pull/4607",
"diff_url": "https://github.com/huggingface/datasets/pull/4607.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4607.patch",
"merged_at": "2022-07-01T11:49... | true |
1,290,083,534 | 4,606 | evaluation result changes after `datasets` version change | closed | [
"Hi! The GH/no-namespace datasets versioning is synced with the version of the `datasets` lib, which means that the `wikiann` script was modified between the two compared versions. In this scenario, you can ensure reproducibility by pinning the script version, which is done by passing `revision=\"x.y.z\"` (e.g. `re... | 2022-06-30T12:43:26 | 2023-07-25T15:05:26 | 2023-07-25T15:05:26 | ## Describe the bug
evaluation result changes after `datasets` version change
## Steps to reproduce the bug
1. Train a model on WikiAnn
2. reload the ckpt -> test accuracy becomes same as eval accuracy
3. such behavior is gone after downgrading `datasets`
https://colab.research.google.com/drive/1kYz7-aZRGdaya... | thnkinbtfly | https://github.com/huggingface/datasets/issues/4606 | null | false |
1,290,058,970 | 4,605 | Dataset Viewer issue for boris/gis_filtered | closed | [
"Yes, this dataset is \"gated\": you first have to go to https://huggingface.co/datasets/boris/gis_filtered and click \"Access repository\" (if you accept to share your contact information with the repository authors).",
"I already did that, it returns error when using streaming",
"Oh, sorry, I misread. Looking... | 2022-06-30T12:23:34 | 2022-07-06T12:34:19 | 2022-07-06T12:34:19 | ### Link
https://huggingface.co/datasets/boris/gis_filtered/viewer/boris--gis_filtered/train
### Description
When I try to access this from the website I get this error:
Status code: 400
Exception: ClientResponseError
Message: 401, message='Unauthorized', url=URL('https://huggingface.co/datase... | WaterKnight1998 | https://github.com/huggingface/datasets/issues/4605 | null | false |
1,289,963,962 | 4,604 | Update CI Windows orb | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-30T11:00:31 | 2022-06-30T13:33:11 | 2022-06-30T13:22:26 | This PR tries to fix recurrent random CI failures on Windows.
After 2 runs, it seems to have fixed the issue.
Fix #4603. | albertvillanova | https://github.com/huggingface/datasets/pull/4604 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4604",
"html_url": "https://github.com/huggingface/datasets/pull/4604",
"diff_url": "https://github.com/huggingface/datasets/pull/4604.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4604.patch",
"merged_at": "2022-06-30T13:22... | true |
1,289,963,331 | 4,603 | CI fails recurrently and randomly on Windows | closed | [] | 2022-06-30T10:59:58 | 2022-06-30T13:22:25 | 2022-06-30T13:22:25 | As reported by @lhoestq,
The windows CI is currently flaky: some dependencies like `aiobotocore`, `multiprocess` and `seqeval` sometimes fail to install.
In particular it seems that building the wheels fail. Here is an example of logs:
```
Building wheel for seqeval (setup.py): started
Running command 'C:\to... | albertvillanova | https://github.com/huggingface/datasets/issues/4603 | null | false |
1,289,950,379 | 4,602 | Upgrade setuptools in windows CI | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-30T10:48:41 | 2023-09-24T10:05:10 | 2022-06-30T12:46:17 | The windows CI is currently flaky: some dependencies like aiobotocore, multiprocess and seqeval sometimes fail to install.
In particular it seems that building the wheels fail. Here is an example of logs
```
Building wheel for seqeval (setup.py): started
Running command 'C:\tools\miniconda3\envs\py37\python.exe... | lhoestq | https://github.com/huggingface/datasets/pull/4602 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4602",
"html_url": "https://github.com/huggingface/datasets/pull/4602",
"diff_url": "https://github.com/huggingface/datasets/pull/4602.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4602.patch",
"merged_at": null
} | true |
1,289,924,715 | 4,601 | Upgrade pip in WIN CI | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"It failed terribly"
] | 2022-06-30T10:25:42 | 2023-09-24T10:04:25 | 2022-06-30T10:43:38 | The windows CI is currently flaky: some dependencies like aiobotocore, multiprocess and seqeval sometimes fail to install.
In particular it seems that building the wheels fail. Here is an example of logs
```
Building wheel for seqeval (setup.py): started
Running command 'C:\tools\miniconda3\envs\py37\python.exe... | lhoestq | https://github.com/huggingface/datasets/pull/4601 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4601",
"html_url": "https://github.com/huggingface/datasets/pull/4601",
"diff_url": "https://github.com/huggingface/datasets/pull/4601.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4601.patch",
"merged_at": null
} | true |
1,289,177,042 | 4,600 | Remove multiple config section | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-29T19:09:21 | 2022-07-04T17:41:20 | 2022-07-04T17:29:41 | This PR removes docs for a future feature and redirects to #4578 instead. See this [discussion](https://huggingface.slack.com/archives/C034N0A7H09/p1656107063801969) for more details :) | stevhliu | https://github.com/huggingface/datasets/pull/4600 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4600",
"html_url": "https://github.com/huggingface/datasets/pull/4600",
"diff_url": "https://github.com/huggingface/datasets/pull/4600.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4600.patch",
"merged_at": "2022-07-04T17:29... | true |
1,288,849,933 | 4,599 | Smooth-BLEU bug fixed | closed | [
"Thanks @Aktsvigun for your fix.\r\n\r\nHowever, metrics in `datasets` are in deprecation mode:\r\n- #4739\r\n\r\nYou should transfer this PR to the `evaluate` library: https://github.com/huggingface/evaluate\r\n\r\nJust for context, here the link to the PR by @Aktsvigun on tensorflow/nmt:\r\n- https://github.com/t... | 2022-06-29T14:51:42 | 2022-09-23T07:42:40 | 2022-09-23T07:42:40 | Hi,
the current implementation of smooth-BLEU contains a bug: it smoothes unigrams as well. Consequently, when both the reference and translation consist of totally different tokens, it anyway returns a non-zero value (please see the attached image).
This however contradicts the source paper suggesting the smoot... | Aktsvigun | https://github.com/huggingface/datasets/pull/4599 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4599",
"html_url": "https://github.com/huggingface/datasets/pull/4599",
"diff_url": "https://github.com/huggingface/datasets/pull/4599.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4599.patch",
"merged_at": null
} | true |
1,288,774,514 | 4,598 | Host financial_phrasebank data on the Hub | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-29T13:59:31 | 2022-07-01T09:41:14 | 2022-07-01T09:29:36 |
Fix #4597. | albertvillanova | https://github.com/huggingface/datasets/pull/4598 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4598",
"html_url": "https://github.com/huggingface/datasets/pull/4598",
"diff_url": "https://github.com/huggingface/datasets/pull/4598.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4598.patch",
"merged_at": "2022-07-01T09:29... | true |
1,288,672,007 | 4,597 | Streaming issue for financial_phrasebank | closed | [
"cc @huggingface/datasets: it seems like https://www.researchgate.net/ is flaky for datasets hosting (I put the \"hosted-on-google-drive\" tag since it's the same kind of issue I think)",
"Let's see if their license allows hosting their data on the Hub.",
"License is Creative Commons Attribution-NonCommercial-S... | 2022-06-29T12:45:43 | 2022-07-01T09:29:36 | 2022-07-01T09:29:36 | ### Link
https://huggingface.co/datasets/financial_phrasebank/viewer/sentences_allagree/train
### Description
As reported by a community member using [AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/5#62bc217436d0e5d316a768f0), there seems to be a problem streaming this dat... | lewtun | https://github.com/huggingface/datasets/issues/4597 | null | false |
1,288,381,735 | 4,596 | Dataset Viewer issue for universal_dependencies | closed | [
"Thanks, looking at it!",
"Finally fixed! We updated the dataset viewer and it fixed the issue.\r\n\r\nhttps://huggingface.co/datasets/universal_dependencies/viewer/aqz_tudet/train\r\n\r\n<img width=\"1561\" alt=\"Capture d'écran 2022-09-07 à 13 29 18\" src=\"https://user-images.githubusercontent.com/1676121/18... | 2022-06-29T08:50:29 | 2022-09-07T11:29:28 | 2022-09-07T11:29:27 | ### Link
https://huggingface.co/datasets/universal_dependencies
### Description
invalid json response body at https://datasets-server.huggingface.co/splits?dataset=universal_dependencies reason: Unexpected token I in JSON at position 0
### Owner
_No response_ | Jordy-VL | https://github.com/huggingface/datasets/issues/4596 | null | false |
1,288,275,976 | 4,595 | Dataset Viewer issue with False positive PII redaction | closed | [
"The value is in the data, it's not an issue with the \"dataset-viewer\".\r\n\r\n<img width=\"1161\" alt=\"Capture d'écran 2022-06-29 à 10 25 51\" src=\"https://user-images.githubusercontent.com/1676121/176389325-4d2a9a7f-1583-45b8-aa7a-960ffaa6a36a.png\">\r\n\r\n Maybe open a PR: https://huggingface.co/datasets/... | 2022-06-29T07:15:57 | 2022-06-29T08:29:41 | 2022-06-29T08:27:49 | ### Link
https://huggingface.co/datasets/cakiki/rosetta-code
### Description
Hello, I just noticed an entry being redacted that shouldn't have been:
`RootMeanSquare@Range[10]` is being displayed as `[email protected][10]`
### Owner
_No response_ | cakiki | https://github.com/huggingface/datasets/issues/4595 | null | false |
1,288,070,023 | 4,594 | load_from_disk suggests incorrect fix when used to load DatasetDict | closed | [] | 2022-06-29T01:40:01 | 2022-06-29T04:03:44 | 2022-06-29T04:03:44 | Edit: Please feel free to remove this issue. The problem was not the error message but the fact that the DatasetDict.load_from_disk does not support loading nested splits, i.e. if one of the splits is itself a DatasetDict. If nesting splits is an antipattern, perhaps the load_from_disk function can throw a warning indi... | dvsth | https://github.com/huggingface/datasets/issues/4594 | null | false |
1,288,067,699 | 4,593 | Fix error message when using load_from_disk to load DatasetDict | closed | [] | 2022-06-29T01:34:27 | 2022-06-29T04:01:59 | 2022-06-29T04:01:39 | Issue #4594
Issue: When `datasets.load_from_disk` is wrongly used to load a `DatasetDict`, the error message suggests using `datasets.load_from_disk`, which is the same function that generated the error.
Fix: The appropriate function which should be suggested instead is `datasets.dataset_dict.load_from_disk`.
Chan... | dvsth | https://github.com/huggingface/datasets/pull/4593 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4593",
"html_url": "https://github.com/huggingface/datasets/pull/4593",
"diff_url": "https://github.com/huggingface/datasets/pull/4593.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4593.patch",
"merged_at": null
} | true |
1,288,029,377 | 4,592 | Issue with jalFaizy/detect_chess_pieces when running datasets-cli test | closed | [
"Hi @faizankshaikh\r\n\r\nPlease note that we have recently launched the Community feature, specifically targeted to create Discussions (about issues/questions/asking-for-help) on each Dataset on the Hub:\r\n- Blog post: https://huggingface.co/blog/community-update\r\n- Docs: https://huggingface.co/docs/hub/reposit... | 2022-06-29T00:15:54 | 2022-06-29T10:30:03 | 2022-06-29T07:49:27 | ### Link
https://huggingface.co/datasets/jalFaizy/detect_chess_pieces
### Description
I am trying to write a appropriate data loader for [a custom dataset](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces) using [this script](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/blob/main/detect_c... | faizankshaikh | https://github.com/huggingface/datasets/issues/4592 | null | false |
1,288,021,332 | 4,591 | Can't push Images to hub with manual Dataset | closed | [
"Hi, thanks for reporting! This issue stems from the changes introduced in https://github.com/huggingface/datasets/pull/4282 (cc @lhoestq), in which list casts are ignored if they don't change the list type (required to preserve `null` values). And `push_to_hub` does a special cast to embed external image files but... | 2022-06-29T00:01:23 | 2022-07-08T12:01:36 | 2022-07-08T12:01:35 | ## Describe the bug
If I create a dataset including an 'Image' feature manually, when pushing to hub decoded images are not pushed,
instead it looks for image where image local path is/used to be.
This doesn't (at least didn't used to) happen with imagefolder. I want to build dataset manually because it is compli... | cceyda | https://github.com/huggingface/datasets/issues/4591 | null | false |