| id | number | title | state | comments | created_at | updated_at | closed_at | body | user | html_url | pull_request | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,613,439,709 | 5,619 | unpin fsspec | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-07T13:22:41 | 2023-03-07T13:47:01 | 2023-03-07T13:39:02 | close https://github.com/huggingface/datasets/issues/5618 | lhoestq | https://github.com/huggingface/datasets/pull/5619 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5619",
"html_url": "https://github.com/huggingface/datasets/pull/5619",
"diff_url": "https://github.com/huggingface/datasets/pull/5619.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5619.patch",
"merged_at": "2023-03-07T13:39... | true |
1,612,977,934 | 5,618 | Unpin fsspec < 2023.3.0 once issue fixed | closed | [] | 2023-03-07T08:41:51 | 2023-03-07T13:39:03 | 2023-03-07T13:39:03 | Unpin the `fsspec` upper version once the root cause of our CI break is fixed.
See:
- #5614 | albertvillanova | https://github.com/huggingface/datasets/issues/5618 | null | false |
1,612,947,422 | 5,617 | Fix CI by temporarily pinning fsspec < 2023.3.0 | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-07T08:18:20 | 2023-03-07T08:44:55 | 2023-03-07T08:37:28 | As a hotfix for our CI, temporarily pin `fsspec`:
Fix #5616.
Until root cause is fixed, see:
- #5614 | albertvillanova | https://github.com/huggingface/datasets/pull/5617 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5617",
"html_url": "https://github.com/huggingface/datasets/pull/5617",
"diff_url": "https://github.com/huggingface/datasets/pull/5617.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5617.patch",
"merged_at": "2023-03-07T08:37... | true |
1,612,932,508 | 5,616 | CI is broken after fsspec-2023.3.0 release | closed | [] | 2023-03-07T08:06:39 | 2023-03-07T08:37:29 | 2023-03-07T08:37:29 | As reported by @lhoestq, our CI is broken after `fsspec` 2023.3.0 release:
```
FAILED tests/test_filesystem.py::test_compression_filesystems[Bz2FileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt']
At index 0 diff: {'name': 'file.txt', 'size': 70, 'type': 'file', 'created': 1678175677... | albertvillanova | https://github.com/huggingface/datasets/issues/5616 | null | false |
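The assertion diff above suggests the test expected bare file names while fsspec 2023.3.0 started returning metadata dicts. A minimal illustration of the two `ls` modes (my reading of the failure, not the actual fix from #5614):

```python
import fsspec

fs = fsspec.filesystem("file")
# detail=False yields plain paths; detail=True yields metadata dicts
# with keys like 'name', 'size', 'type', 'created'.
print(fs.ls(".", detail=False))
print(fs.ls(".", detail=True))
```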
1,612,552,653 | 5,615 | IterableDataset.add_column is unable to accept another IterableDataset as a parameter. | closed | [
"Hi! You can use `concatenate_datasets([ids1, ids2], axis=1)` to do this."
] | 2023-03-07T01:52:00 | 2023-03-09T15:24:05 | 2023-03-09T15:23:54 | ### Describe the bug
`IterableDataset.add_column` raises an exception when passing another `IterableDataset` as a parameter.
The method seems to accept only eager evaluated values.
https://github.com/huggingface/datasets/blob/35b789e8f6826b6b5a6b48fcc2416c890a1f326a/src/datasets/iterable_dataset.py#L1388-L1391
... | zsaladin | https://github.com/huggingface/datasets/issues/5615 | null | false |
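A minimal sketch of the workaround from the comment above, assuming both iterable datasets have the same number of rows:

```python
from datasets import Dataset, concatenate_datasets

ids1 = Dataset.from_dict({"text": ["a", "b"]}).to_iterable_dataset()
ids2 = Dataset.from_dict({"label": [0, 1]}).to_iterable_dataset()

# axis=1 concatenates column-wise, emulating add_column for iterable datasets
combined = concatenate_datasets([ids1, ids2], axis=1)
print(next(iter(combined)))  # {'text': 'a', 'label': 0}
```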
1,611,896,357 | 5,614 | Fix archive fs test | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-06T17:28:09 | 2023-03-07T13:27:50 | 2023-03-07T13:20:57 | null | lhoestq | https://github.com/huggingface/datasets/pull/5614 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5614",
"html_url": "https://github.com/huggingface/datasets/pull/5614",
"diff_url": "https://github.com/huggingface/datasets/pull/5614.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5614.patch",
"merged_at": "2023-03-07T13:20... | true |
1,611,875,473 | 5,613 | Version mismatch with multiprocess and dill on Python 3.10 | open | [
"Sorry, I just found https://github.com/apache/beam/issues/24458. It seems this issue is being worked on. ",
"Reopening, since I think the docs should inform the user of this problem. For example, [this page](https://huggingface.co/docs/datasets/installation) says \r\n> Datasets is tested on Python 3.7+.\r\n\r\nb... | 2023-03-06T17:14:41 | 2024-04-05T20:13:52 | null | ### Describe the bug
Grabbing the latest version of `datasets` and `apache-beam` with `poetry` using Python 3.10 gives a crash at runtime. The crash is
```
File "/Users/adpauls/sc/git/DSI-transformers/data/NQ/create_NQ_train_vali.py", line 1, in <module>
import datasets
File "/Users/adpauls/Library/Caches/... | adampauls | https://github.com/huggingface/datasets/issues/5613 | null | false |
1,611,262,510 | 5,612 | Arrow map type in parquet files unsupported | open | [
"I'm attaching a minimal reproducible example:\r\n```python\r\nfrom datasets import load_dataset\r\nimport pyarrow as pa\r\nimport pyarrow.parquet as pq\r\n\r\ntable_with_map = pa.Table.from_pydict(\r\n {\"a\": [1, 2], \"b\": [[(\"a\", 2)], [(\"b\", 4)]]},\r\n schema=pa.schema({\"a\": pa.int32(), \"b\": pa.ma... | 2023-03-06T12:03:24 | 2024-03-15T18:56:12 | null | ### Describe the bug
When I try to load parquet files that were processed with Spark, I get the following issue:
`ValueError: Arrow type map<string, string ('warc_headers')> does not have a datasets dtype equivalent.`
Strangely, loading the dataset with `streaming=True` solves the issue.
### Steps to reproduce ... | TevenLeScao | https://github.com/huggingface/datasets/issues/5612 | null | false |
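A hedged completion of the truncated repro from the comment above; the map value type (`int32`) and the file name are assumptions:

```python
import pyarrow as pa
import pyarrow.parquet as pq
from datasets import load_dataset

table_with_map = pa.Table.from_pydict(
    {"a": [1, 2], "b": [[("a", 2)], [("b", 4)]]},
    schema=pa.schema({"a": pa.int32(), "b": pa.map_(pa.string(), pa.int32())}),
)
pq.write_table(table_with_map, "map.parquet")

# Raises "Arrow type map<...> does not have a datasets dtype equivalent"
# on affected versions; streaming=True reportedly works around it.
ds = load_dataset("parquet", data_files="map.parquet")
```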
1,611,197,906 | 5,611 | add Dataset.to_list | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi, thanks for working on this! `Table.to_pylist` requires PyArrow 7.0+, and our minimal version requirement is 6.0, so we need to bump the version requirement to avoid CI failure. I'll do this in a separate PR.",
"<details>\n<summ... | 2023-03-06T11:21:57 | 2023-03-27T13:34:19 | 2023-03-27T13:26:38 | close https://github.com/huggingface/datasets/issues/5606
This PR is for adding the `Dataset.to_list` method.
Thank you in advance.
| kyoto7250 | https://github.com/huggingface/datasets/pull/5611 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5611",
"html_url": "https://github.com/huggingface/datasets/pull/5611",
"diff_url": "https://github.com/huggingface/datasets/pull/5611.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5611.patch",
"merged_at": "2023-03-27T13:26... | true |
1,610,698,006 | 5,610 | use datasets streaming mode in trainer ddp mode cause memory leak | open | [
"Same problem, \r\ntransformers 4.28.1\r\ndatasets 2.12.0\r\n\r\nleak around 100Mb per 10 seconds when use dataloader_num_werker > 0 in training argumennts for transformer train, possile bug in transformers repo, but still not found solution :(\r\n",
"found an article described a problem, may be helpful for someb... | 2023-03-06T05:26:49 | 2024-03-07T01:11:32 | null | ### Describe the bug
Using datasets streaming mode in Trainer DDP mode causes a memory leak.
### Steps to reproduce the bug
import os
import time
import datetime
import sys
import numpy as np
import random
import torch
from torch.utils.data import Dataset, DataLoader, random_split, RandomSampler, Sequenti... | gromzhu | https://github.com/huggingface/datasets/issues/5610 | null | false |
1,610,062,862 | 5,609 | `load_from_disk` vs `load_dataset` performance. | open | [
"Hi! We've recently made some improvements to `save_to_disk`/`list_to_disk` (100x faster in some scenarios), so it would help if you could install `datasets` directly from `main` (`pip install git+https://github.com/huggingface/datasets.git`) and re-run the \"benchmark\".",
"Great to hear! I'll give it a try when... | 2023-03-05T05:27:15 | 2023-07-13T18:48:05 | null | ### Describe the bug
I have downloaded `openwebtext` (~12GB) and filtered out a small amount of junk (it's still huge). Now, I would like to use this filtered version for future work. It seems I have two choices:
1. Use `load_dataset` each time, relying on the cache mechanism, and re-run my filtering.
2. `save_to_di... | davidgilbertson | https://github.com/huggingface/datasets/issues/5609 | null | false |
1,609,996,563 | 5,608 | audiofolder only creates dataset of 13 rows (files) when the data folder it's reading from has 20,000 mp3 files. | closed | [
"Hi!\r\n\r\n> naming convention of mp3 files\r\n\r\nYes, this could be the problem. MP3 files should end with `.mp3`/`.MP3` to be recognized as audio files.\r\n\r\nIf the file names are not the culprit, can you paste the audio folder's directory structure to help us reproduce the error (e.g., by running the `tree ... | 2023-03-05T00:14:45 | 2023-03-12T00:02:57 | 2023-03-12T00:02:57 | ### Describe the bug
x = load_dataset("audiofolder", data_dir="x")
When running this, x is a dataset of 13 rows (files) when it should be 20,000 rows (files) as the data_dir "x" has 20,000 mp3 files. Does anyone know what could possibly cause this (naming convention of mp3 files, etc.)?
### Steps to reproduce the b... | jcho19 | https://github.com/huggingface/datasets/issues/5608 | null | false |
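A quick sanity check following the comment above (my own suggestion, with the issue's directory name): count how many files in the folder actually carry a recognized `.mp3` extension:

```python
from pathlib import Path

# audiofolder only picks up files with a recognized audio extension
files = [p for p in Path("x").rglob("*") if p.is_file()]
mp3s = [p for p in files if p.suffix.lower() == ".mp3"]
print(f"{len(mp3s)} of {len(files)} files end in .mp3/.MP3")
```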
1,609,166,035 | 5,607 | Fix outdated `verification_mode` values | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-03T19:50:29 | 2023-03-09T17:34:13 | 2023-03-09T17:27:07 | ~I think it makes sense not to save `dataset_info.json` file to a dataset cache directory when loading dataset with `verification_mode="no_checks"` because otherwise when next time the dataset is loaded **without** `verification_mode="no_checks"`, it will be loaded successfully, despite some values in info might not co... | polinaeterna | https://github.com/huggingface/datasets/pull/5607 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5607",
"html_url": "https://github.com/huggingface/datasets/pull/5607",
"diff_url": "https://github.com/huggingface/datasets/pull/5607.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5607.patch",
"merged_at": "2023-03-09T17:27... | true |
1,608,911,632 | 5,606 | Add `Dataset.to_list` to the API | closed | [
"Hello, I have an interest in this issue.\r\nIs the `Dataset.to_dict` you are describing correct in the code here?\r\n\r\nhttps://github.com/huggingface/datasets/blob/35b789e8f6826b6b5a6b48fcc2416c890a1f326a/src/datasets/arrow_dataset.py#L4633-L4667",
"Yes, this is where `Dataset.to_dict` is defined.",
"#self-a... | 2023-03-03T16:17:10 | 2023-03-27T13:26:40 | 2023-03-27T13:26:40 | Since there is `Dataset.from_list` in the API, we should also add `Dataset.to_list` to be consistent.
Regarding the implementation, we can re-use `Dataset.to_dict`'s code and replace the `to_pydict` calls with `to_pylist`. | mariosasko | https://github.com/huggingface/datasets/issues/5606 | null | false |
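An illustration of the proposed `to_pydict` → `to_pylist` swap using PyArrow directly (`Table.to_pylist` requires PyArrow 7.0+, as noted in the PR discussion above):

```python
import pyarrow as pa

tbl = pa.table({"a": [1, 2], "b": ["x", "y"]})
print(tbl.to_pydict())  # column-oriented: {'a': [1, 2], 'b': ['x', 'y']}
print(tbl.to_pylist())  # row-oriented: [{'a': 1, 'b': 'x'}, {'a': 2, 'b': 'y'}]
```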
1,608,865,460 | 5,605 | Update README logo | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Are you sure it's safe to remove? https://github.com/huggingface/datasets/pull/3866",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benc... | 2023-03-03T15:46:31 | 2023-03-03T21:57:18 | 2023-03-03T21:50:17 | null | gary149 | https://github.com/huggingface/datasets/pull/5605 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5605",
"html_url": "https://github.com/huggingface/datasets/pull/5605",
"diff_url": "https://github.com/huggingface/datasets/pull/5605.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5605.patch",
"merged_at": "2023-03-03T21:50... | true |
1,608,304,775 | 5,604 | Problems with downloading The Pile | closed | [
"Hi! \r\n\r\n\r\nYou can specify `download_config=DownloadConfig(resume_download=True))` in `load_dataset` to resume the download when re-running the code after the timeout error:\r\n```python\r\nfrom datasets import load_dataset, DownloadConfig\r\ndataset = load_dataset('the_pile', split='train', cache_dir='F:\\da... | 2023-03-03T09:52:08 | 2023-10-14T02:15:52 | 2023-03-24T12:44:25 | ### Describe the bug
The downloads in the screenshot seem to be interrupted after some time and the last download throws a "Read timed out" error.

Here are the downloaded files:
... | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5602). All of your documentation changes will be reflected on that endpoint.",
"This is a great PR! Thinking about the UX though, maybe we could do it without the extra argument? Before this PR, the logic in `to_tf_dataset` was... | 2023-03-02T15:51:12 | 2023-04-12T15:54:53 | null | This PR introduces new logic to `to_tf_dataset` affecting the returned data structure, enabling a dictionary structure to be returned, even if only one feature column is selected.
If the `columns` or `label_cols` passed to `to_tf_dataset` are a list, each is returned as a dictionary. If they are a... | amyeroberts | https://github.com/huggingface/datasets/pull/5602 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5602",
"html_url": "https://github.com/huggingface/datasets/pull/5602",
"diff_url": "https://github.com/huggingface/datasets/pull/5602.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5602.patch",
"merged_at": null
} | true |
1,606,685,976 | 5,601 | Authorization error | closed | [
"Hi! \r\n\r\nIt's better to report this kind of issue in the `huggingface_hub` repo, so if you still haven't resolved it, I suggest you open an issue there.",
"Yeah, I solved it. Problem was in osxkeychain. When I do `hugginface-cli login` it's add token with default account (username)`hg_user` but my repo cont... | 2023-03-02T12:08:39 | 2023-03-14T16:55:35 | 2023-03-14T16:55:34 | ### Describe the bug
I get an `Authorization error` when trying to push data to the Hugging Face datasets hub.
### Steps to reproduce the bug
I did all steps in the [tutorial](https://huggingface.co/docs/datasets/share),
1. `huggingface-cli login` with WRITE token
2. `git lfs install`
3. `git clone https://huggingfa... | OleksandrKorovii | https://github.com/huggingface/datasets/issues/5601 | null | false |
1,606,585,596 | 5,600 | Dataloader getitem not working for DreamboothDatasets | closed | [
"Hi! \r\n\r\n> (see example of DreamboothDatasets)\r\n\r\n\r\nCould you please provide a link to it? If you are referring to the example in the `diffusers` repo, your issue is unrelated to `datasets` as that example uses `Dataset` from PyTorch to load data."
] | 2023-03-02T11:00:27 | 2023-03-13T17:59:35 | 2023-03-13T17:59:35 | ### Describe the bug
Dataloader getitem is not working as before (see example of [DreamboothDatasets](https://github.com/huggingface/peft/blob/main/examples/lora_dreambooth/train_dreambooth.py#L451C14-L529))
Moving `datasets` back to 2.8.0 solved the issue.
### Steps to reproduce the bug
1- using DreamBoothDataset ... | salahiguiliz | https://github.com/huggingface/datasets/issues/5600 | null | false |
1,605,018,478 | 5,598 | Fix push_to_hub with no dataset_infos | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-01T13:54:06 | 2023-03-02T13:47:13 | 2023-03-02T13:40:17 | As reported in https://github.com/vijaydwivedi75/lrgb/issues/10, `push_to_hub` fails if the remote repository already exists and has a README.md without `dataset_info` in the YAML tags
cc @clefourrier | lhoestq | https://github.com/huggingface/datasets/pull/5598 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5598",
"html_url": "https://github.com/huggingface/datasets/pull/5598",
"diff_url": "https://github.com/huggingface/datasets/pull/5598.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5598.patch",
"merged_at": "2023-03-02T13:40... | true |
1,604,928,721 | 5,597 | in-place dataset update | closed | [
"We won't support in-place modifications since `datasets` is based on the Apache Arrow format which doesn't support in-place modifications.\r\n\r\nIn your case the old dataset is garbage collected pretty quickly so you won't have memory issues.\r\n\r\nNote that datasets loaded from disk (memory mapped) are not load... | 2023-03-01T12:58:18 | 2023-03-02T13:30:41 | 2023-03-02T03:47:00 | ### Motivation
For the circumstance where I create an empty `Dataset` and keep appending new rows to it, I found that each call creates a new dataset. It looks quite memory-consuming. I just wonder if there is a more efficient way to do this.
```python
from datasets import Dataset
ds = Datas... | speedcell4 | https://github.com/huggingface/datasets/issues/5597 | null | false |
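A hedged alternative to row-by-row appends, given the maintainer's note above that the Arrow format does not support in-place modification: accumulate rows in plain Python and build the `Dataset` once (`Dataset.from_list` is in the API, see #5606):

```python
from datasets import Dataset

rows = [{"x": i} for i in range(3)]  # accumulate in a plain Python list
ds = Dataset.from_list(rows)         # build the Arrow-backed dataset once
```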
1,604,919,993 | 5,596 | [TypeError: Couldn't cast array of type] Can only load a subset of the dataset | closed | [
"Apparently some JSON objects have a `\"labels\"` field. Since this field is not present in every object, you must specify all the fields types in the README.md\r\n\r\nEDIT: actually specifying the feature types doesn’t solve the issue, it raises an error because “labels” is missing in the data",
"We've updated t... | 2023-03-01T12:53:08 | 2023-12-05T03:22:00 | 2023-03-02T11:12:11 | ### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 1839, in wr... | loubnabnl | https://github.com/huggingface/datasets/issues/5596 | null | false |
1,604,070,629 | 5,595 | Unpins sqlAlchemy | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5595). All of your documentation changes will be reflected on that endpoint.",
"It looks like this issue hasn't been fixed yet, so let's wait a bit more.",
"@lazarust thanks for your work, but unfortunately we cannot merge it... | 2023-03-01T01:33:45 | 2023-04-04T08:20:19 | 2023-04-04T08:19:14 | Closes #5477 | lazarust | https://github.com/huggingface/datasets/pull/5595 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5595",
"html_url": "https://github.com/huggingface/datasets/pull/5595",
"diff_url": "https://github.com/huggingface/datasets/pull/5595.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5595.patch",
"merged_at": null
} | true |
1,603,980,995 | 5,594 | Error while downloading the xtreme udpos dataset | closed | [
"Hi! I cannot reproduce this error on my machine.\r\n\r\nThe raised error could mean that one of the downloaded files is corrupted. To verify this is not the case, you can run `load_dataset` as follows:\r\n```python\r\ntrain_dataset = load_dataset('xtreme', 'udpos.English', split=\"train\", cache_dir=args.cache_dir... | 2023-02-28T23:40:53 | 2023-11-04T20:45:56 | 2023-07-24T14:22:18 | ### Describe the bug
Hi,
I am facing an error while downloading the xtreme udpos dataset using load_dataset. I have datasets 2.10.1 installed
```Downloading and preparing dataset xtreme/udpos.Arabic to /compute/tir-1-18/skhanuja/multilingual_ft/cache/data/xtreme/udpos.Arabic/1.0.0/29f5d57a48779f37ccb75cb8708d1... | simran-khanuja | https://github.com/huggingface/datasets/issues/5594 | null | false |
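Following the maintainer's suggestion above, a sketch for ruling out a corrupted download by forcing a fresh one (the `cache_dir` argument from the original snippet is omitted here):

```python
from datasets import load_dataset

train_dataset = load_dataset(
    "xtreme", "udpos.English", split="train",
    download_mode="force_redownload",  # re-fetch instead of trusting cached files
)
```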
1,603,619,124 | 5,592 | Fix docstring example | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-28T18:42:37 | 2023-02-28T19:26:33 | 2023-02-28T19:19:15 | Fixes #5581 to use the correct output for the `set_format` method. | stevhliu | https://github.com/huggingface/datasets/pull/5592 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5592",
"html_url": "https://github.com/huggingface/datasets/pull/5592",
"diff_url": "https://github.com/huggingface/datasets/pull/5592.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5592.patch",
"merged_at": "2023-02-28T19:19... | true |
1,603,571,407 | 5,591 | set dev version | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5591). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | 2023-02-28T18:09:05 | 2023-02-28T18:16:31 | 2023-02-28T18:09:15 | null | lhoestq | https://github.com/huggingface/datasets/pull/5591 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5591",
"html_url": "https://github.com/huggingface/datasets/pull/5591",
"diff_url": "https://github.com/huggingface/datasets/pull/5591.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5591.patch",
"merged_at": "2023-02-28T18:09... | true |
1,603,549,504 | 5,590 | Release: 2.10.1 | closed | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-02-28T17:58:11 | 2023-02-28T18:16:27 | 2023-02-28T18:06:08 | null | lhoestq | https://github.com/huggingface/datasets/pull/5590 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5590",
"html_url": "https://github.com/huggingface/datasets/pull/5590",
"diff_url": "https://github.com/huggingface/datasets/pull/5590.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5590.patch",
"merged_at": "2023-02-28T18:06... | true |
1,603,535,704 | 5,589 | Revert "pass the dataset features to the IterableDataset.from_generator" | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-28T17:52:04 | 2023-09-24T10:07:33 | 2023-03-21T14:18:18 | This reverts commit b91070b9c09673e2e148eec458036ab6a62ac042 (temporarily)
It hurts iterable dataset performance a lot (e.g. x4 slower because it encodes+decodes images unnecessarily). I think we need to fix this before re-adding it
cc @mariosasko @Hubert-Bonisseur | lhoestq | https://github.com/huggingface/datasets/pull/5589 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5589",
"html_url": "https://github.com/huggingface/datasets/pull/5589",
"diff_url": "https://github.com/huggingface/datasets/pull/5589.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5589.patch",
"merged_at": null
} | true |
1,603,304,766 | 5,588 | Flatten dataset on the fly in `save_to_disk` | closed | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-02-28T15:37:46 | 2023-02-28T17:28:35 | 2023-02-28T17:21:17 | Flatten a dataset on the fly in `save_to_disk` instead of doing it with `flatten_indices` to avoid creating an additional cache file.
(this is one of the sub-tasks in https://github.com/huggingface/datasets/issues/5507) | mariosasko | https://github.com/huggingface/datasets/pull/5588 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5588",
"html_url": "https://github.com/huggingface/datasets/pull/5588",
"diff_url": "https://github.com/huggingface/datasets/pull/5588.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5588.patch",
"merged_at": "2023-02-28T17:21... | true |
1,603,139,420 | 5,587 | Fix `sort` with indices mapping | closed | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-02-28T14:05:08 | 2023-02-28T17:28:57 | 2023-02-28T17:21:58 | Fixes the `key` range in the `query_table` call in `sort` to account for an indices mapping
Fix #5586 | mariosasko | https://github.com/huggingface/datasets/pull/5587 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5587",
"html_url": "https://github.com/huggingface/datasets/pull/5587",
"diff_url": "https://github.com/huggingface/datasets/pull/5587.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5587.patch",
"merged_at": "2023-02-28T17:21... | true |
1,602,961,544 | 5,586 | .sort() is broken when used after .filter(), only in 2.10.0 | closed | [
"Thanks for reporting and thanks @mariosasko for fixing ! We just did a patch release `2.10.1` with the fix"
] | 2023-02-28T12:18:09 | 2023-02-28T18:17:26 | 2023-02-28T17:21:59 | ### Describe the bug
Hi, thank you for your support!
It seems like the addition of multiple key sort (#5502) in 2.10.0 broke the `.sort()` method.
After filtering a dataset with `.filter()`, the `.sort()` seems to refer to the query_table index of the previous unfiltered dataset, resulting in an IndexError.
... | MattYoon | https://github.com/huggingface/datasets/issues/5586 | null | false |
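A hedged minimal repro of the report above (my own example values; affected version `datasets==2.10.0`, fixed by #5587 in 2.10.1):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [3, 1, 2]})
ds = ds.filter(lambda ex: ex["x"] > 1)  # filtering creates an indices mapping
ds = ds.sort("x")                       # raised IndexError on 2.10.0
```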
1,602,190,030 | 5,585 | Cache is not transportable | closed | [
"Hi ! No the cache is not transportable in general. It will work on a shared filesystem if you use the same python environment, but not across machines/os/environments.\r\n\r\nIn particular, reloading cached datasets does work, but reloading cached processed datasets (e.g. from `map`) may not work. This is because ... | 2023-02-28T00:53:06 | 2023-02-28T21:26:52 | 2023-02-28T21:26:52 | ### Describe the bug
I would like to share the cache between two machines (a Windows host machine and a WSL instance).
I run most of my code in WSL. I have just run out of space in the virtual drive. Rather than expand the drive size, I plan to move the cache to the host Windows machine, thereby sharing the downloads.
I... | davidgilbertson | https://github.com/huggingface/datasets/issues/5585 | null | false |
1,601,821,808 | 5,584 | Unable to load coyo700M dataset | closed | [
"Hi @manuaero \r\n\r\nThank you for your interest in the COYO dataset.\r\n\r\nOur dataset provides the img-url and alt-text in the form of a parquet, so to utilize the coyo dataset you will need to download it directly.\r\n\r\nWe provide a [guide](https://github.com/kakaobrain/coyo-dataset/blob/main/download/README... | 2023-02-27T19:35:03 | 2023-02-28T07:27:59 | 2023-02-28T07:27:58 | ### Describe the bug
Seeing this error when downloading https://huggingface.co/datasets/kakaobrain/coyo-700m:
```ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.```
Full stack trace
```Downloading and preparing dataset parquet/kakaobrain--coy... | manuaero | https://github.com/huggingface/datasets/issues/5584 | null | false |
1,601,583,625 | 5,583 | Do no write index by default when exporting a dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-27T17:04:46 | 2023-02-28T13:52:15 | 2023-02-28T13:44:04 | Ensures all the writers that use Pandas for conversion (JSON, CSV, SQL) do not export `index` by default (https://github.com/huggingface/datasets/pull/5490 only did this for CSV) | mariosasko | https://github.com/huggingface/datasets/pull/5583 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5583",
"html_url": "https://github.com/huggingface/datasets/pull/5583",
"diff_url": "https://github.com/huggingface/datasets/pull/5583.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5583.patch",
"merged_at": "2023-02-28T13:44... | true |
1,600,932,092 | 5,582 | Add column_names to IterableDataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-27T10:50:07 | 2023-03-13T19:10:22 | 2023-03-13T19:03:32 | This PR closes #5383
* Add column_names property to IterableDataset
* Add multiple tests for this new property | patrickloeber | https://github.com/huggingface/datasets/pull/5582 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5582",
"html_url": "https://github.com/huggingface/datasets/pull/5582",
"diff_url": "https://github.com/huggingface/datasets/pull/5582.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5582.patch",
"merged_at": "2023-03-13T19:03... | true |
1,600,675,489 | 5,581 | [DOC] Mistaken docs on set_format | closed | [
"Thanks for reporting!"
] | 2023-02-27T08:03:09 | 2023-02-28T19:19:17 | 2023-02-28T19:19:17 | ### Describe the bug
https://huggingface.co/docs/datasets/v2.10.0/en/package_reference/main_classes#datasets.Dataset.set_format
<img width="700" alt="image" src="https://user-images.githubusercontent.com/36224762/221506973-ae2e3991-60a7-4d4e-99f8-965c6eb61e59.png">
While actually running it will result in:
<img w... | NightMachinery | https://github.com/huggingface/datasets/issues/5581 | null | false |
1,600,431,792 | 5,580 | Support cloud storage in load_dataset via fsspec | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Regarding the tests I think it should be possible to use the mockfs fixture, it allows to play with a dummy fsspec FileSystem with the \"mock://\" protocol.\r\n\r\n> However it requires some storage_options to be passed. Maybe it c... | 2023-02-27T04:06:05 | 2024-11-27T01:25:39 | 2023-03-11T00:55:40 | Closes https://github.com/huggingface/datasets/issues/5281
This PR uses fsspec to support datasets on cloud storage (tested manually with GCS). ETags are currently unsupported for cloud storage. In general, a much larger refactor could be done to just use fsspec for all schemes (ftp, http/s, s3, gcs) to unify the in... | dwyatte | https://github.com/huggingface/datasets/pull/5580 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5580",
"html_url": "https://github.com/huggingface/datasets/pull/5580",
"diff_url": "https://github.com/huggingface/datasets/pull/5580.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5580.patch",
"merged_at": "2023-03-11T00:55... | true |
1,599,732,211 | 5,579 | Add instructions to create `DataLoader` from augmented dataset in object detection guide | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5579). All of your documentation changes will be reflected on that endpoint.",
"I'm not sure we need this part as we provide a link to the notebook that shows how to train an object detection model, and this notebook instantiat... | 2023-02-25T14:53:17 | 2023-03-23T19:24:59 | 2023-03-23T19:24:50 | The following adds instructions on how to create a `DataLoader` from the guide on how to use object detection with augmentations (#4710). I am open to hearing any suggestions for improvement ! | Laurent2916 | https://github.com/huggingface/datasets/pull/5579 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5579",
"html_url": "https://github.com/huggingface/datasets/pull/5579",
"diff_url": "https://github.com/huggingface/datasets/pull/5579.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5579.patch",
"merged_at": null
} | true |
1,598,863,119 | 5,578 | Add `huggingface_hub` version to env cli command | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-24T15:37:43 | 2023-02-27T17:28:25 | 2023-02-27T17:21:09 | Add the `huggingface_hub` version to the `env` command's output. | mariosasko | https://github.com/huggingface/datasets/pull/5578 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5578",
"html_url": "https://github.com/huggingface/datasets/pull/5578",
"diff_url": "https://github.com/huggingface/datasets/pull/5578.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5578.patch",
"merged_at": "2023-02-27T17:21... | true |
1,598,587,665 | 5,577 | Cannot load `the_pile_openwebtext2` | closed | [
"Hi! I've merged a PR to use `int32` instead of `int8` for `reddit_scores`, so it should work now.\r\n\r\n"
] | 2023-02-24T13:01:48 | 2023-02-24T14:01:09 | 2023-02-24T14:01:09 | ### Describe the bug
I met the same bug mentioned in #3053, which was never fixed, because several `reddit_scores` are larger than `int8`, or even `int16`. https://huggingface.co/datasets/the_pile_openwebtext2/blob/main/the_pile_openwebtext2.py#L62
### Steps to reproduce the bug
```python3
from datasets import load... | wjfwzzc | https://github.com/huggingface/datasets/issues/5577 | null | false |
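An illustration of the overflow behind this error, using the value quoted in the related report below (#5576):

```python
import pyarrow as pa

pa.array([528]).cast(pa.int32())  # fine: 528 fits in int32
pa.array([528]).cast(pa.int8())   # ArrowInvalid: Integer value 528 not in range: -128 to 127
```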
1,598,582,744 | 5,576 | I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers. | closed | [
"Duplicated issue."
] | 2023-02-24T12:57:49 | 2023-02-24T12:58:31 | 2023-02-24T12:58:18 | I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers.
I worked aro... | wjfwzzc | https://github.com/huggingface/datasets/issues/5576 | null | false |
1,598,396,552 | 5,575 | Metadata for each column | open | [
"Hi! Indeed it would be useful to support this. PyArrow natively supports schema-level and column-level metadata, so implementing this should be straightforward. The API I have in mind would work as follows:\r\n```python\r\ncol_feature = Value(\"string\", metadata=\"Some column-level metadata\")\r\n\r\nfeatures = F... | 2023-02-24T10:53:44 | 2024-01-05T21:48:35 | null | ### Feature request
Being able to put some metadata for each column as a string or any other type.
### Motivation
I will motivate this with an example: let's say we are experimenting with embeddings produced by some image encoder network, and we want to iterate through a couple of preprocessing steps and see which on... | parsa-ra | https://github.com/huggingface/datasets/issues/5575 | null | false |
1,598,104,691 | 5,574 | c4 dataset streaming fails with `FileNotFoundError` | closed | [
"Also encountering this issue for every dataset I try to stream! Installed datasets from main:\r\n```\r\n- `datasets` version: 2.10.1.dev0\r\n- Platform: macOS-13.1-arm64-arm-64bit\r\n- Python version: 3.9.13\r\n- PyArrow version: 10.0.1\r\n- Pandas version: 1.5.2\r\n```\r\n\r\nRepro:\r\n```python\r\nfrom datasets ... | 2023-02-24T07:57:32 | 2023-12-18T07:32:32 | 2023-02-27T04:03:38 | ### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("c4", "en", split="train", ... | krasserm | https://github.com/huggingface/datasets/issues/5574 | null | false |
1,597,400,836 | 5,573 | Use soundfile for mp3 decoding instead of torchaudio | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@mariosasko thank you for the review! do you have any idea why `test_hash_torch_tensor` fails on \"ubuntu-latest deps-minimum\"? I removed the `torchaudio<0.12.0` test dependency so it uses the latest `torch` now, might it be connect... | 2023-02-23T19:19:44 | 2023-02-28T20:25:14 | 2023-02-28T20:16:02 | I've removed `torchaudio` completely and switched to use `soundfile` for everything. With the new version of `soundfile` package this should work smoothly because the `libsndfile` C library is bundled, in Linux wheels too.
Let me know if you think it's too harsh and we should continue to support `torchaudio` decodi... | polinaeterna | https://github.com/huggingface/datasets/pull/5573 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5573",
"html_url": "https://github.com/huggingface/datasets/pull/5573",
"diff_url": "https://github.com/huggingface/datasets/pull/5573.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5573.patch",
"merged_at": "2023-02-28T20:16... | true |
1,597,257,624 | 5,572 | Datasets 2.10.0 does not reuse the dataset cache | closed | [] | 2023-02-23T17:28:11 | 2023-02-23T18:03:55 | 2023-02-23T18:03:55 | ### Describe the bug
download_mode="reuse_dataset_if_exists" will always consider that a dataset doesn't exist.
Specifically, upon losing an internet connection trying to load a dataset for a second time in ten seconds, a connection error results, showing a breakpoint of:
```
File ~/jupyterlab/.direnv/python-... | lsb | https://github.com/huggingface/datasets/issues/5572 | null | false |
1,597,198,953 | 5,571 | load_dataset fails for JSON in windows | closed | [
"Hi! \r\n\r\nYou need to pass an input json file explicitly as `data_files` to `load_dataset` to avoid this error:\r\n```python\r\n ds = load_dataset(\"json\", data_files=args.input_json)\r\n```\r\n\r\n",
"Thanks it worked!"
] | 2023-02-23T16:50:11 | 2023-02-24T13:21:47 | 2023-02-24T13:21:47 | ### Describe the bug
Steps:
1. Created a dataset in a Linux VM and created a small sample using dataset.to_json() method.
2. Downloaded the JSON file to my local Windows machine for working and saved in say - r"C:\Users\name\file.json"
3. I am reading the file in my local PyCharm - the location of python file is di... | abinashsahu | https://github.com/huggingface/datasets/issues/5571 | null | false |
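A hedged round trip showing the fix from the comment above: write with `to_json`, then pass the file explicitly via `data_files` (illustrative file name):

```python
from datasets import Dataset, load_dataset

Dataset.from_dict({"a": [1, 2]}).to_json("file.json")  # JSON Lines by default
ds = load_dataset("json", data_files="file.json")      # works on Windows too
```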
1,597,190,926 | 5,570 | load_dataset gives FileNotFoundError on imagenet-1k if license is not accepted on the hub | closed | [
"Hi, thanks for the feedback! Would it help to add a tip or note saying the dataset is gated and you need to accept the license before downloading it?",
"The error is now more informative:\r\n```\r\nFileNotFoundError: Couldn't find a dataset script at /content/imagenet-1k/imagenet-1k.py or any data file in the sa... | 2023-02-23T16:44:32 | 2023-07-24T15:18:50 | 2023-07-24T15:18:50 | ### Describe the bug
When calling ```load_dataset('imagenet-1k')```, a FileNotFoundError is raised if you are not logged in, or if you are logged in with huggingface-cli but have not accepted the licence on the hub. There is no error once the licence is accepted.
### Steps to reproduce the bug
```
from datasets import load_dataset
imagenet =... | buoi | https://github.com/huggingface/datasets/issues/5570 | null | false |
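A sketch of loading the gated dataset once the license is accepted on the Hub; passing the token explicitly is my assumption about the simplest call for `datasets` 2.x of that era:

```python
# Prerequisites: accept the imagenet-1k license on the Hub,
# then run `huggingface-cli login`.
from datasets import load_dataset

imagenet = load_dataset("imagenet-1k", use_auth_token=True)
```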
1,597,132,383 | 5,569 | pass the dataset features to the IterableDataset.from_generator function | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-23T16:06:04 | 2023-02-24T14:06:37 | 2023-02-23T18:15:16 | [5558](https://github.com/huggingface/datasets/issues/5568) | bruno-hays | https://github.com/huggingface/datasets/pull/5569 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5569",
"html_url": "https://github.com/huggingface/datasets/pull/5569",
"diff_url": "https://github.com/huggingface/datasets/pull/5569.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5569.patch",
"merged_at": "2023-02-23T18:15... | true |
1,596,900,532 | 5,568 | dataset.to_iterable_dataset() loses useful info like dataset features | closed | [
"Hi ! Oh good catch. I think the features should be passed to `IterableDataset.from_generator()` in `to_iterable_dataset()` indeed.\r\n\r\nSetting this as a good first issue if someone would like to contribute, otherwise we can take care of it :)",
"#self-assign",
"seems like the feature parameter is missing fr... | 2023-02-23T13:45:33 | 2023-02-24T13:22:36 | 2023-02-24T13:22:36 | ### Describe the bug
Hello,
I like the new `to_iterable_dataset` feature but I noticed something that seems to be missing.
When using `to_iterable_dataset` to transform your map-style dataset into an iterable dataset, you lose valuable metadata like the features.
This metadata is useful if you want to interleav... | bruno-hays | https://github.com/huggingface/datasets/issues/5568 | null | false |
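A hedged illustration of the gap described above, and of the fix direction from the comments (forward `features` to `IterableDataset.from_generator`, done in #5569):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})
ids = ds.to_iterable_dataset()
print(ds.features)   # {'text': Value(dtype='string', ...)}
print(ids.features)  # None before the fix, ds.features after it
```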
1,595,916,674 | 5,566 | Directly reading parquet files in a s3 bucket from the load_dataset method | open | [
"Hi ! I think is in the scope of this other issue: to https://github.com/huggingface/datasets/issues/5281 "
] | 2023-02-22T22:13:40 | 2023-02-23T11:03:29 | null | ### Feature request
Right now, we have to download the parquet files to local storage. So having the ability to read them directly from the bucket address would be beneficial.
### Motivation
In a production setup, this feature can help us a lot, so we do not need to move training data files between storage systems.
### Yo... | shamanez | https://github.com/huggingface/datasets/issues/5566 | null | false |
1,595,281,752 | 5,565 | Add writer_batch_size for ArrowBasedBuilder | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-22T15:09:30 | 2023-03-10T13:53:03 | 2023-03-10T13:45:43 | This way we can control the size of the record_batches/row_groups of arrow/parquet files.
This can be useful for `datasets-server` to keep control of the row groups size which can affect random access performance for audio/image/video datasets
Right now, having 1,000 examples per row group causes some image dataset... | lhoestq | https://github.com/huggingface/datasets/pull/5565 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5565",
"html_url": "https://github.com/huggingface/datasets/pull/5565",
"diff_url": "https://github.com/huggingface/datasets/pull/5565.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5565.patch",
"merged_at": "2023-03-10T13:45... | true |
1,595,064,698 | 5,564 | Set dev version | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5564). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | 2023-02-22T13:00:09 | 2023-02-22T13:09:26 | 2023-02-22T13:00:25 | null | lhoestq | https://github.com/huggingface/datasets/pull/5564 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5564",
"html_url": "https://github.com/huggingface/datasets/pull/5564",
"diff_url": "https://github.com/huggingface/datasets/pull/5564.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5564.patch",
"merged_at": "2023-02-22T13:00... | true |
1,595,049,025 | 5,563 | Release: 2.10.0 | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-22T12:48:52 | 2023-02-22T13:05:55 | 2023-02-22T12:56:48 | null | lhoestq | https://github.com/huggingface/datasets/pull/5563 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5563",
"html_url": "https://github.com/huggingface/datasets/pull/5563",
"diff_url": "https://github.com/huggingface/datasets/pull/5563.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5563.patch",
"merged_at": "2023-02-22T12:56... | true |
1,594,625,539 | 5,562 | Update csv.py | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Removed it :)",
"Changed it :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_format... | 2023-02-22T07:56:10 | 2023-02-23T11:07:49 | 2023-02-23T11:00:58 | Removed mangle_dup_cols=True from BuilderConfig.
It triggered following deprecation warning:
/usr/local/lib/python3.8/dist-packages/datasets/download/streaming_download_manager.py:776: FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the ... | xdoubleu | https://github.com/huggingface/datasets/pull/5562 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5562",
"html_url": "https://github.com/huggingface/datasets/pull/5562",
"diff_url": "https://github.com/huggingface/datasets/pull/5562.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5562.patch",
"merged_at": "2023-02-23T11:00... | true |
1,593,862,388 | 5,561 | Add pre-commit config yaml file to enable automatic code formatting | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Better yet have someone enable pre-commit CI https://pre-commit.ci/ and it will apply the pre-commit fixes to the PR automatically as an additional commit.",
"@Skylion007 hi! I agree with @nateraw here, I'd better not force to use ... | 2023-02-21T17:35:07 | 2023-02-28T15:37:22 | 2023-02-23T18:23:29 | @huggingface/datasets do you think it would be useful? Motivation - sometimes PRs are like 30% "fix: style" commits :)
If so - I need to double check the config but for me locally it works as expected. | polinaeterna | https://github.com/huggingface/datasets/pull/5561 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5561",
"html_url": "https://github.com/huggingface/datasets/pull/5561",
"diff_url": "https://github.com/huggingface/datasets/pull/5561.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5561.patch",
"merged_at": "2023-02-23T18:23... | true |
1,593,809,978 | 5,560 | Ensure last tqdm update in `map` | closed | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-02-21T16:56:17 | 2023-02-21T18:26:23 | 2023-02-21T18:19:09 | This PR modifies `map` to:
* ensure the TQDM bar gets the last progress update
* when a map function fails, avoid throwing a chained exception in the single-proc mode | mariosasko | https://github.com/huggingface/datasets/pull/5560 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5560",
"html_url": "https://github.com/huggingface/datasets/pull/5560",
"diff_url": "https://github.com/huggingface/datasets/pull/5560.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5560.patch",
"merged_at": "2023-02-21T18:19... | true |
1,593,676,489 | 5,559 | Fix map suffix_template | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-21T15:26:26 | 2023-02-21T17:21:37 | 2023-02-21T17:14:29 | #5455 introduced a small bug that lead `map` to ignore the `suffix_template` argument and not put suffixes to cached files in multiprocessing.
I fixed this and also improved a few things:
- regarding logging: "Loading cached processed dataset" is now logged only once even in multiprocessing (it used to be logged ... | lhoestq | https://github.com/huggingface/datasets/pull/5559 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5559",
"html_url": "https://github.com/huggingface/datasets/pull/5559",
"diff_url": "https://github.com/huggingface/datasets/pull/5559.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5559.patch",
"merged_at": "2023-02-21T17:14... | true |
1,593,655,815 | 5,558 | Remove instructions for `ffmpeg` system package installation on Colab | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-21T15:13:36 | 2023-03-01T13:46:04 | 2023-02-23T13:50:27 | Colab now has Ubuntu 20.04 which already has `ffmpeg` of required (>4) version. | polinaeterna | https://github.com/huggingface/datasets/pull/5558 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5558",
"html_url": "https://github.com/huggingface/datasets/pull/5558",
"diff_url": "https://github.com/huggingface/datasets/pull/5558.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5558.patch",
"merged_at": "2023-02-23T13:50... | true |
1,593,545,324 | 5,557 | Add filter desc | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-21T14:04:42 | 2023-02-21T14:19:54 | 2023-02-21T14:12:39 | Otherwise it would show a `Map` progress bar, since it uses `map` under the hood | lhoestq | https://github.com/huggingface/datasets/pull/5557 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5557",
"html_url": "https://github.com/huggingface/datasets/pull/5557",
"diff_url": "https://github.com/huggingface/datasets/pull/5557.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5557.patch",
"merged_at": "2023-02-21T14:12... | true |
1,593,246,936 | 5,556 | Use default audio resampling type | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-21T10:45:50 | 2023-02-21T12:49:50 | 2023-02-21T12:42:52 | ...instead of relying on the optional librosa dependency `resampy`.
It was only used for `_decode_non_mp3_file_like` anyway and not for the other ones - removing it fixes consistency between decoding methods (except torchaudio decoding)
Therefore I think it is a better solution than adding `resampy` as a dependen... | lhoestq | https://github.com/huggingface/datasets/pull/5556 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5556",
"html_url": "https://github.com/huggingface/datasets/pull/5556",
"diff_url": "https://github.com/huggingface/datasets/pull/5556.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5556.patch",
"merged_at": "2023-02-21T12:42... | true |
1,592,469,938 | 5,555 | `.shuffle` throwing error `ValueError: Protocol not known: parent` | open | [
"Hi ! The indices mapping is written in the same cachedirectory as your dataset.\r\n\r\nCan you run this to show your current cache directory ?\r\n```python\r\nprint(train_dataset.cache_files)\r\n```",
"```\r\n[{'filename': '.../train/dataset.arrow'}, {'filename': '.../train/dataset.arrow'}]\r\n```\r\n\r\nThese a... | 2023-02-20T21:33:45 | 2023-02-27T09:23:34 | null | ### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle()
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/dataset... | prabhakar267 | https://github.com/huggingface/datasets/issues/5555 | null | false |
1,592,285,062 | 5,554 | Add resampy dep | closed | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-02-20T18:15:43 | 2023-09-24T10:07:29 | 2023-02-21T12:43:38 | In librosa 0.10 they removed the `resmpy` dependency and set it to optional.
However it is necessary for resampling. I added it to the "audio" extra dependencies. | lhoestq | https://github.com/huggingface/datasets/pull/5554 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5554",
"html_url": "https://github.com/huggingface/datasets/pull/5554",
"diff_url": "https://github.com/huggingface/datasets/pull/5554.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5554.patch",
"merged_at": null
} | true |
1,592,236,998 | 5,553 | improved message error row formatting | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-20T17:29:14 | 2023-02-21T13:08:25 | 2023-02-21T12:58:12 | Solves #5539 | Plutone11011 | https://github.com/huggingface/datasets/pull/5553 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5553",
"html_url": "https://github.com/huggingface/datasets/pull/5553",
"diff_url": "https://github.com/huggingface/datasets/pull/5553.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5553.patch",
"merged_at": "2023-02-21T12:58... | true |
1,592,186,703 | 5,552 | Make tiktoken tokenizers hashable | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-20T16:50:09 | 2023-02-21T13:20:42 | 2023-02-21T13:13:05 | Fix for https://discord.com/channels/879548962464493619/1075729627546406912/1075729627546406912
| mariosasko | https://github.com/huggingface/datasets/pull/5552 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5552",
"html_url": "https://github.com/huggingface/datasets/pull/5552",
"diff_url": "https://github.com/huggingface/datasets/pull/5552.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5552.patch",
"merged_at": "2023-02-21T13:13... | true |
1,592,140,836 | 5,551 | Suggest scikit-learn instead of sklearn | closed | [
"good catch!",
"_The documentation is not available anymore as the PR was closed or merged._",
"The test fail is unrelated to this PR and fixed on `main` - merging :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: ... | 2023-02-20T16:16:57 | 2023-02-21T13:27:57 | 2023-02-21T13:21:07 | This is kinda unimportant fix but, the suggested `pip install sklearn` does not work.
The current error message if sklearn is not installed:
```
ImportError: To be able to use [dataset name], you need to install the following dependency: sklearn.
Please install it using 'pip install sklearn' for instance.
```
... | osbm | https://github.com/huggingface/datasets/pull/5551 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5551",
"html_url": "https://github.com/huggingface/datasets/pull/5551",
"diff_url": "https://github.com/huggingface/datasets/pull/5551.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5551.patch",
"merged_at": "2023-02-21T13:21... | true |
1,591,409,475 | 5,550 | Resolve four broken refs in the docs | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"See the resolved changes [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5550/en/package_reference/main_classes#datasets.Dataset.class_encode_column), [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5550/en/pa... | 2023-02-20T08:52:11 | 2023-02-20T15:16:13 | 2023-02-20T15:09:13 | Hello!
## Pull Request overview
* Resolve 4 broken references in the docs
## The problems
Two broken references [here](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.class_encode_column):
:\r\n def __init__(self, features=None, device=None, **jnp_array_kwargs):\r\n super().__init__(features=features)\r\n import jax\r\n from jaxlib.xla_extension import Devi... | 2023-02-18T20:57:40 | 2023-02-21T16:10:55 | 2023-02-21T16:04:03 | ## What's in this PR?
After exploring for a while the JAX integration in 🤗`datasets`, I found out that, even though JAX prioritizes the TPU and GPU as the default device when available, the `JaxFormatter` doesn't let you specify the device where you want to place the `jax.Array`s in case you don't want to rely on J... | alvarobartt | https://github.com/huggingface/datasets/pull/5547 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5547",
"html_url": "https://github.com/huggingface/datasets/pull/5547",
"diff_url": "https://github.com/huggingface/datasets/pull/5547.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5547.patch",
"merged_at": "2023-02-21T16:04... | true |
1,590,346,349 | 5,546 | Downloaded datasets do not cache at $HF_HOME | closed | [
"Hi ! Can you make sure you set `HF_HOME` before importing `datasets` ?\r\n\r\nThen you can print\r\n```python\r\nprint(datasets.config.HF_CACHE_HOME)\r\nprint(datasets.config.HF_DATASETS_CACHE)\r\n```"
] | 2023-02-18T13:30:35 | 2023-07-24T14:22:43 | 2023-07-24T14:22:43 | ### Describe the bug
In the Hugging Face course (https://huggingface.co/course/chapter3/2?fw=pt) it says that if we set HF_HOME, downloaded datasets will be cached at the specified address, but they are not. Models downloaded from checkpoint names are cached at HF_HOME, but this is not the case for datasets, t... | ErfanMoosaviMonazzah | https://github.com/huggingface/datasets/issues/5546 | null | false |
1,590,315,972 | 5,545 | Added return methods for URL-references to the pushed dataset | open | [
"Hi ! Maybe we'd need to align with `transformers` and other libraries that implement `push_to_hub` to agree on what it should return.\r\n\r\ne.g. in `transformers` the typing says it returns a string, but in practice it returns a `CommitInfo`.\r\n\r\nTherefore I'd not add an output to `push_to_hub` here unless we ... | 2023-02-18T11:26:25 | 2023-12-18T16:57:56 | null | Hi,
I was missing the ability to easily open the pushed dataset and it seemed like a quick fix.
Maybe we also want to log this info somewhere, but let me know if I need to add that too.
Cheers,
David | davidberenstein1957 | https://github.com/huggingface/datasets/pull/5545 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5545",
"html_url": "https://github.com/huggingface/datasets/pull/5545",
"diff_url": "https://github.com/huggingface/datasets/pull/5545.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5545.patch",
"merged_at": null
} | true |
1,588,951,379 | 5,543 | the pile datasets url seems to change back | closed | [
"Thanks for reporting, @wjfwzzc.\r\n\r\nI am transferring this issue to the corresponding dataset on the Hub: https://huggingface.co/datasets/bookcorpusopen/discussions/1",
"Thank you. All fixes are done:\r\n- [x] https://huggingface.co/datasets/bookcorpusopen/discussions/2\r\n- [x] https://huggingface.co/dataset... | 2023-02-17T08:40:11 | 2023-02-21T06:37:00 | 2023-02-20T08:41:33 | ### Describe the bug
In #3627, the host URL of the pile dataset became `https://mystic.the-eye.eu`. Now the new URL is broken, but `https://the-eye.eu` seems to work again.
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_dataset("bookcorpusopen")
```
shows
```python3
... | wjfwzzc | https://github.com/huggingface/datasets/issues/5543 | null | false |
1,588,633,724 | 5,542 | Avoid saving sparse ChunkedArrays in pyarrow tables | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-17T01:52:38 | 2023-02-17T19:20:49 | 2023-02-17T11:12:32 | Fixes https://github.com/huggingface/datasets/issues/5541 | marioga | https://github.com/huggingface/datasets/pull/5542 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5542",
"html_url": "https://github.com/huggingface/datasets/pull/5542",
"diff_url": "https://github.com/huggingface/datasets/pull/5542.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5542.patch",
"merged_at": "2023-02-17T11:12... | true |
1,588,633,555 | 5,541 | Flattening indices in selected datasets is extremely inefficient | closed | [
"Running the script above on the branch https://github.com/huggingface/datasets/pull/5542 results in the expected behaviour:\r\n```\r\nNum chunks for original ds: 1\r\nOriginal ds save/load\r\nsave_to_disk -- RAM memory used: 0.671875 MB -- Total time: 0.255265 s\r\nload_from_disk -- RAM memory used: 42.796875 MB -... | 2023-02-17T01:52:24 | 2023-02-22T13:15:20 | 2023-02-17T11:12:33 | ### Describe the bug
If we perform a `select` (or `shuffle`, `train_test_split`, etc.) operation on a dataset , we end up with a dataset with an `indices_table`. Currently, flattening such dataset consumes a lot of memory and the resulting flat dataset contains ChunkedArrays with as many chunks as there are rows. Thi... | marioga | https://github.com/huggingface/datasets/issues/5541 | null | false |
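A minimal sketch of the pattern being benchmarked, assuming the public `select`/`flatten_indices` API; the "Num chunks" figures in the comment above came from a similar script:
```python
import time
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(100_000))})
subset = ds.select(range(50_000))  # stores an indices mapping, no row copy
start = time.time()
flat = subset.flatten_indices()    # materializes the selection; the slow step reported here
print(f"flatten_indices took {time.time() - start:.2f}s")
```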
1,588,438,344 | 5,540 | Tutorial for creating a dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-16T22:09:35 | 2023-02-17T18:50:46 | 2023-02-17T18:41:28 | A tutorial for creating datasets based on the folder-based builders and `from_dict` and `from_generator` methods. I've also mentioned loading scripts as a next step, but I think we should keep the tutorial focused on the low-code methods. Let me know what you think! 🙂 | stevhliu | https://github.com/huggingface/datasets/pull/5540 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5540",
"html_url": "https://github.com/huggingface/datasets/pull/5540",
"diff_url": "https://github.com/huggingface/datasets/pull/5540.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5540.patch",
"merged_at": "2023-02-17T18:41... | true |
1,587,970,083 | 5,539 | IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number | closed | [
"Hi! The `set_transform` does not apply a custom formatting transform on a single example but the entire batch, so the fixed version of your transform would look as follows:\r\n```python\r\nfrom datasets import load_dataset\r\nimport torch\r\n\r\ndataset = load_dataset(\"lambdalabs/pokemon-blip-captions\", split='t... | 2023-02-16T16:08:51 | 2023-02-22T10:30:30 | 2023-02-21T13:03:57 | ### Describe the bug
When a dataset contains a 0-dim tensor, formatting.py raises the following error and fails.
```bash
Traceback (most recent call last):
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 501, in format_row
return _unnest(formatted_batch)
File "<path>/lib/py... | aalbersk | https://github.com/huggingface/datasets/issues/5539 | null | false |
1,587,732,596 | 5,538 | load_dataset in seaborn is not working for me. getting this error. | closed | [
"Hi! `seaborn`'s `load_dataset` pulls datasets from [here](https://github.com/mwaskom/seaborn-data) and not from our Hub, so this issue is not related to our library in any way and should be reported in their repo instead."
] | 2023-02-16T14:01:58 | 2023-02-16T14:44:36 | 2023-02-16T14:44:36 | TimeoutError Traceback (most recent call last)
~\anaconda3\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args)
1345 try:
-> 1346 h.request(req.get_method(), req.selector, req.data, headers,
1347 encode_chu... | reemaranibarik | https://github.com/huggingface/datasets/issues/5538 | null | false |
1,587,567,464 | 5,537 | Increase speed of data files resolution | closed | [
"#self-assign",
"You were right, if `self.dir_cache` is not None in glob, it is exactly the same as what is returned by find, at least for all the tests we have, and some extended evaluation I did across a random sample of about 1000 datasets. \r\n\r\nThanks for the nice hints, and let me know if this is not exac... | 2023-02-16T12:11:45 | 2023-12-15T13:12:31 | 2023-12-15T13:12:31 | Certain datasets like `bigcode/the-stack-dedup` have so many files that loading them takes forever right from the data files resolution step.
`datasets` uses file patterns to check the structure of the repository, but it takes too much time to iterate over all the data files again and again.
This comes from `res... | lhoestq | https://github.com/huggingface/datasets/issues/5537 | null | false |
1,586,930,643 | 5,536 | Failure to hash function when using .map() | closed | [
"Hi ! `enc` is not hashable:\r\n```python\r\nimport tiktoken\r\nfrom datasets.fingerprint import Hasher\r\n\r\nenc = tiktoken.get_encoding(\"gpt2\")\r\nHasher.hash(enc)\r\n# raises TypeError: cannot pickle 'builtins.CoreBPE' object\r\n```\r\nIt happens because it's not picklable, and because of that it's not possib... | 2023-02-16T03:12:07 | 2023-09-08T21:06:01 | 2023-02-16T14:56:41 | ### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and ca... | venzen | https://github.com/huggingface/datasets/issues/5536 | null | false |
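Until the fix in #5552, a common sketch of the workaround, assuming the encoder is built inside the mapped function so the non-picklable `CoreBPE` object is never captured in the hashed closure:
```python
import tiktoken
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello world", "foo bar"]})

def process(batch):
    # instantiate here: the encoder is not picklable, so keeping it out of
    # the closure lets fingerprinting hash the function deterministically
    enc = tiktoken.get_encoding("gpt2")
    return {"ids": [enc.encode(t) for t in batch["text"]]}

ds = ds.map(process, batched=True)
```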
1,586,520,369 | 5,535 | Add JAX-formatting documentation | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Awesome thank you !\r\n> \r\n> Could you also explain how to use certain types like ClassLabel, Image or Audio with jax ? You can get a lot of inspiration from the \"Other feature types\" section in the [PyTorch page](https://huggi... | 2023-02-15T20:35:11 | 2023-02-20T10:39:42 | 2023-02-20T10:32:39 | ## What's in this PR?
As a follow-up of #5522, I've created this entry in the documentation to explain how to use `.with_format("jax")` and why it is useful.
@lhoestq Feel free to drop any feedback and/or suggestion, as probably more useful features can be included there! | alvarobartt | https://github.com/huggingface/datasets/pull/5535 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5535",
"html_url": "https://github.com/huggingface/datasets/pull/5535",
"diff_url": "https://github.com/huggingface/datasets/pull/5535.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5535.patch",
"merged_at": "2023-02-20T10:32... | true |
1,586,177,862 | 5,534 | map() breaks at certain dataset size when using Array3D | open | [
"Hi! This code works for me locally or in Colab. What's the output of `python -c \"import pyarrow as pa; print(pa.__version__)\"` when you run it inside your environment?",
"Thanks for looking into this!\r\nThe output of `python -c \"import pyarrow as pa; print(pa.__version__)\"` is:\r\n```\r\n11.0.0\r\n```\r\n\... | 2023-02-15T16:34:25 | 2023-03-03T16:31:33 | null | ### Describe the bug
`map()` magically breaks when using an `Array3D` feature and mapping it. I created a very simple dummy dataset (see below). When filtering it down to 95 elements I can apply map, but it breaks when filtering it down to just 96 entries with the following exception:
```
Traceback (most recent cal... | ArneBinder | https://github.com/huggingface/datasets/issues/5534 | null | false |
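A self-contained sketch of the kind of dummy dataset described; the shape is an arbitrary choice, and per the report the identity map works at 95 rows but fails at 96:
```python
import numpy as np
from datasets import Array3D, Dataset, Features

features = Features({"img": Array3D(shape=(3, 4, 5), dtype="float32")})
ds = Dataset.from_dict(
    {"img": [np.zeros((3, 4, 5), dtype=np.float32)] * 96}, features=features
)
ds = ds.map(lambda ex: ex)  # identity map; the reported failure point
```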
1,585,885,871 | 5,533 | Add reduce function | closed | [
"I agree that it would be a good idea to introduce a `combiner` argument in another PR.\r\n\r\nI did take quite a lot of inspiration from the implementation of `map`, but it did not seem obvious how to resuse `map` for the implementation. Do you have any suggestions, i could give a try?\r\n\r\nThose were exactly m... | 2023-02-15T13:44:01 | 2024-11-25T14:33:27 | 2023-02-28T14:46:12 | This PR closes #5496 .
I tried to imitate the `reduce` method from `functools`, i.e. the function input must be a binary operation. I assume that the input type has an empty element, i.e. `input_type()` is defined, as the accumulator is instantiated as this object - I'm not sure whether this is a reasonable assumption?
... | AJDERS | https://github.com/huggingface/datasets/pull/5533 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5533",
"html_url": "https://github.com/huggingface/datasets/pull/5533",
"diff_url": "https://github.com/huggingface/datasets/pull/5533.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5533.patch",
"merged_at": null
} | true |
1,584,505,128 | 5,532 | train_test_split in arrow_dataset does not ensure to keep single classes in test set | closed | [
"Hi! You can get this behavior by specifying `stratify_by_column=\"label\"` in `train_test_split`.\r\n\r\nThis is the full example:\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset, ClassLabel\r\n\r\ndata = [\r\n {'label': 0, 'text': \"example1\"},\r\n {'label': 1, 'text': \"example2\"},\r\n... | 2023-02-14T16:52:29 | 2023-02-15T16:09:19 | 2023-02-15T16:09:19 | ### Describe the bug
When I have a dataset with very few (e.g. 1) examples per class and I call the train_test_split function on it, sometimes the single class ends up in the test set and thus will never be considered for training.
### Steps to reproduce the bug
```
import numpy as np
from datasets import Dataset
... | Ulipenitz | https://github.com/huggingface/datasets/issues/5532 | null | false |
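A minimal sketch of the fix suggested in the comment above, assuming the label column is first cast to `ClassLabel`, which `stratify_by_column` requires:
```python
from datasets import ClassLabel, Dataset

ds = Dataset.from_dict({"label": [0, 0, 1, 1], "text": ["a", "b", "c", "d"]})
# stratification requires the column to be a ClassLabel feature
ds = ds.cast_column("label", ClassLabel(names=["neg", "pos"]))
splits = ds.train_test_split(test_size=0.5, stratify_by_column="label")
print(splits["test"]["label"])  # each class represented in the test split
```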
1,584,387,276 | 5,531 | Invalid Arrow data from JSONL | open | [] | 2023-02-14T15:39:49 | 2023-02-14T15:46:09 | null | This code fails:
```python
from datasets import Dataset
ds = Dataset.from_json(path_to_file)
ds.data.validate()
```
raises
```python
ArrowInvalid: Column 2: In chunk 1: Invalid: Struct child array #3 invalid: Invalid: Length spanned by list offsets (4064) larger than values array (length 4063)
```
This ... | lhoestq | https://github.com/huggingface/datasets/issues/5531 | null | false |
1,582,938,241 | 5,530 | Add missing license in `NumpyFormatter` | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-13T19:33:23 | 2023-02-14T14:40:41 | 2023-02-14T12:23:58 | ## What's in this PR?
As discussed with @lhoestq in https://github.com/huggingface/datasets/pull/5522, the license for `NumpyFormatter` at `datasets/formatting/np_formatter.py` was missing, but present in the rest of the `formatting/*.py` files. So this PR simply adds it there.
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5530",
"html_url": "https://github.com/huggingface/datasets/pull/5530",
"diff_url": "https://github.com/huggingface/datasets/pull/5530.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5530.patch",
"merged_at": "2023-02-14T12:23... | true |
1,582,501,233 | 5,529 | Fix `datasets.load_from_disk`, `DatasetDict.load_from_disk` and `Dataset.load_from_disk` | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hmm, should this also be updated in `Dataset.load_from_disk` and `DatasetDict.load_from_disk`? https://github.com/huggingface/datasets/pull/5466 As there the paths are joined using `Path(..., ...)` and it won't work on Windows OS acc... | 2023-02-13T14:54:55 | 2023-02-23T18:14:32 | 2023-02-23T18:05:26 | ## What's in this PR?
After playing around a little bit with 🤗`datasets` in Google Cloud Storage (GCS), I found out some things that should be fixed IMO in the code:
* `datasets.load_from_disk` is not checking whether `state.json` is there too when trying to load a `Dataset`, just `dataset_info.json` is checked
... | alvarobartt | https://github.com/huggingface/datasets/pull/5529 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5529",
"html_url": "https://github.com/huggingface/datasets/pull/5529",
"diff_url": "https://github.com/huggingface/datasets/pull/5529.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5529.patch",
"merged_at": "2023-02-23T18:05... | true |
1,582,195,085 | 5,528 | Push to hub in a pull request | open | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5528). All of your documentation changes will be reflected on that endpoint.",
"It seems that the parameter `create_pr` is available for [`0.8.0`](https://huggingface.co/docs/huggingface_hub/v0.8.1/en/package_reference/hf_api#h... | 2023-02-13T11:43:47 | 2023-10-06T21:58:02 | null | Fixes #5492.
Introduces a new kwarg `create_pr` in `push_to_hub`, which is passed to `HfApi.upload_file`. | AJDERS | https://github.com/huggingface/datasets/pull/5528 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5528",
"html_url": "https://github.com/huggingface/datasets/pull/5528",
"diff_url": "https://github.com/huggingface/datasets/pull/5528.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5528.patch",
"merged_at": null
} | true |
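A sketch of the proposed usage; the repo id is hypothetical, and `create_pr` is the kwarg this PR would forward to `HfApi.upload_file` (the PR is still open, so this is not yet part of the released API):
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3]})
# open a pull request on the Hub repo instead of pushing to main
ds.push_to_hub("username/my-dataset", create_pr=True)  # hypothetical repo id
```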
1,581,228,531 | 5,527 | Fix benchmarks CI - pin protobuf | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-12T11:51:25 | 2023-02-13T10:29:03 | 2023-02-13T09:24:16 | fix https://github.com/huggingface/datasets/actions/runs/4156059127/jobs/7189576331 | lhoestq | https://github.com/huggingface/datasets/pull/5527 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5527",
"html_url": "https://github.com/huggingface/datasets/pull/5527",
"diff_url": "https://github.com/huggingface/datasets/pull/5527.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5527.patch",
"merged_at": "2023-02-13T09:24... | true |
1,580,488,133 | 5,526 | Allow loading/saving of FAISS index using fsspec | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the quick review! I updated the code with your suggestion",
"Thanks for the quick review @albertvillanova! I updated the code with your suggestions",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n... | 2023-02-10T23:37:14 | 2023-03-27T15:26:46 | 2023-03-27T15:18:20 | Fixes #5428
Allow loading/saving of FAISS index using fsspec:
1. Simply use BufferedIOWriter/Reader to Read/Write indices on fsspec stream.
2. Needed `mockfs` in the test, so I took it out of the `TestCase`. Let me know if that makes sense.
I can work on the documentation once the code changes are approved.
| Dref360 | https://github.com/huggingface/datasets/pull/5526 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5526",
"html_url": "https://github.com/huggingface/datasets/pull/5526",
"diff_url": "https://github.com/huggingface/datasets/pull/5526.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5526.patch",
"merged_at": "2023-03-27T15:18... | true |
1,580,342,729 | 5,525 | TypeError: Couldn't cast array of type string to null | closed | [
"Thanks for reporting, @TJ-Solergibert.\r\n\r\nWe cannot access your Colab notebook: `There was an error loading this notebook. Ensure that the file is accessible and try again.`\r\nCould you please make it publicly accessible?\r\n",
"I swear it's public, I've checked the settings and I've been able to open it in... | 2023-02-10T21:12:36 | 2023-02-14T17:41:08 | 2023-02-14T09:35:49 | ### Describe the bug
Processing a dataset I already uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I already tried resetting the shorter strings... | TJ-Solergibert | https://github.com/huggingface/datasets/issues/5525 | null | false |
1,580,219,454 | 5,524 | [INVALID PR] | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2023-02-10T19:35:50 | 2023-02-10T19:51:45 | 2023-02-10T19:49:12 | Hi to whoever is reading this! 🤗
## What's in this PR?
~~Basically, I've removed the 🤗`datasets` installation as `python -m pip install ".[quality]"` in the `check_code_quality` job in `.github/workflows/ci.yaml`, as we don't need to install the whole package to run the CI, unless that's done on purpose e.g. to ...
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5524",
"html_url": "https://github.com/huggingface/datasets/pull/5524",
"diff_url": "https://github.com/huggingface/datasets/pull/5524.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5524.patch",
"merged_at": null
} | true |
1,580,193,015 | 5,523 | Checking that split name is correct happens only after the data is downloaded | open | [] | 2023-02-10T19:13:03 | 2023-02-10T19:14:50 | null | ### Describe the bug
Verification of split names (=indexing data by split) happens after downloading the data. So when the split name is incorrect, users learn about that only after the data is fully downloaded; for large datasets this might take a lot of time.
### Steps to reproduce the bug
Load any dataset with rand... | polinaeterna | https://github.com/huggingface/datasets/issues/5523 | null | false |
1,580,183,124 | 5,522 | Minor changes in JAX-formatting docstrings & type-hints | closed | [
"P.S. For more context, I'm currently exploring the integration of 🤗`datasets` with JAX, so in case you need any help or want me to try something specific just let me know! (`jnp.asarray`/`jnp.array(..., copy=False)` still no zero-copy 😭)",
"_The documentation is not available anymore as the PR was closed or me... | 2023-02-10T19:05:00 | 2023-02-15T14:48:27 | 2023-02-15T13:19:06 | Hi to whoever is reading this! 🤗
## What's in this PR?
I was exploring the code regarding the `JaxFormatter` implemented in 🤗`datasets`, and found some things that IMO could be changed. Those are mainly regarding the docstrings and the type-hints based on `jax`'s 0.4.1 release where `jax.Array` was introduced a... | alvarobartt | https://github.com/huggingface/datasets/pull/5522 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5522",
"html_url": "https://github.com/huggingface/datasets/pull/5522",
"diff_url": "https://github.com/huggingface/datasets/pull/5522.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5522.patch",
"merged_at": "2023-02-15T13:19... | true |
1,578,418,289 | 5,521 | Fix bug when casting empty array to class labels | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2023-02-09T18:47:59 | 2023-02-13T20:40:48 | 2023-02-12T11:17:17 | Fix https://github.com/huggingface/datasets/issues/5520. | marioga | https://github.com/huggingface/datasets/pull/5521 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5521",
"html_url": "https://github.com/huggingface/datasets/pull/5521",
"diff_url": "https://github.com/huggingface/datasets/pull/5521.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5521.patch",
"merged_at": "2023-02-12T11:17... | true |
1,578,417,074 | 5,520 | ClassLabel.cast_storage raises TypeError when called on an empty IntegerArray | closed | [] | 2023-02-09T18:46:52 | 2023-02-12T11:17:18 | 2023-02-12T11:17:18 | ### Describe the bug
`ClassLabel.cast_storage` raises `TypeError` when called on an empty `IntegerArray`.
### Steps to reproduce the bug
Minimal steps:
```python
import pyarrow as pa
from datasets import ClassLabel
ClassLabel(names=['foo', 'bar']).cast_storage(pa.array([], pa.int64()))
```
In practice, thi... | marioga | https://github.com/huggingface/datasets/issues/5520 | null | false |
1,578,341,785 | 5,519 | Lint code with `ruff` | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-09T17:50:21 | 2024-06-01T15:35:02 | 2023-02-14T16:18:38 | EDIT:
Use `ruff` for linting instead of `isort` and `flake8` ~~`black`~~ to be consistent with [`transformers`](https://github.com/huggingface/transformers/pull/21480) and [`hfh`](https://github.com/huggingface/huggingface_hub/pull/1323).
TODO:
- [x] ~Merge the community contributors' PR to avoid having to run `ma... | mariosasko | https://github.com/huggingface/datasets/pull/5519 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5519",
"html_url": "https://github.com/huggingface/datasets/pull/5519",
"diff_url": "https://github.com/huggingface/datasets/pull/5519.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5519.patch",
"merged_at": "2023-02-14T16:18... | true |
1,578,203,962 | 5,518 | Remove py.typed | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-09T16:22:29 | 2023-02-13T13:55:49 | 2023-02-13T13:48:40 | Fix https://github.com/huggingface/datasets/issues/3841 | mariosasko | https://github.com/huggingface/datasets/pull/5518 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5518",
"html_url": "https://github.com/huggingface/datasets/pull/5518",
"diff_url": "https://github.com/huggingface/datasets/pull/5518.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5518.patch",
"merged_at": "2023-02-13T13:48... | true |
1,577,976,608 | 5,517 | `with_format("numpy")` silently downcasts float64 to float32 features | open | [
"Hi! This behavior stems from these lines:\r\n\r\nhttps://github.com/huggingface/datasets/blob/b065547654efa0ec633cf373ac1512884c68b2e1/src/datasets/formatting/np_formatter.py#L45-L46\r\n\r\nI agree we should preserve the original type whenever possible and downcast explicitly with a warning.\r\n\r\n@lhoestq Do you... | 2023-02-09T14:18:00 | 2024-01-18T08:42:17 | null | ### Describe the bug
When I create a dataset with a `float64` feature and then apply numpy formatting, the returned numpy arrays are silently downcast to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print(... | ernestum | https://github.com/huggingface/datasets/issues/5517 | null | false |
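Based on the formatter lines quoted in the comment above, a sketch of a possible workaround: the fallback to `float32` only triggers when no dtype is given, so supplying one explicitly through the format kwargs should preserve `float64` (an assumption, not verified against every version):
```python
import numpy as np
import datasets

dataset = datasets.Dataset.from_dict({"a": [1.0, 2.0, 3.0]})
# pass dtype through the numpy formatter kwargs to skip the float32 default
dataset = dataset.with_format("numpy", dtype=np.float64)
print(dataset["a"].dtype)  # expected: float64
```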
1,577,661,640 | 5,516 | Reload features from Parquet metadata | closed | [
"Thanks a lot for your help @lhoestq. I've simplified what turned out to be a simple fix and added the unit test.\r\n\r\nDoes this look ready to be merged or is there anything I'm still missing?",
"Cool ! I think you just need to remove the unused import in `io/parquet.py`\r\n```\r\nsrc/datasets/io/parquet.py:4:1... | 2023-02-09T10:52:15 | 2023-02-12T16:00:00 | 2023-02-12T15:57:01 | Resolves #5482.
Attaches feature metadata to parquet files serialised using `Dataset.to_parquet`.
This allows retrieving data with "rich" feature types (e.g., `datasets.features.image.Image` or `datasets.features.audio.Audio`) from parquet files without cumbersome casting (for an example, see #5482).
@lhoest... | MFreidank | https://github.com/huggingface/datasets/pull/5516 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5516",
"html_url": "https://github.com/huggingface/datasets/pull/5516",
"diff_url": "https://github.com/huggingface/datasets/pull/5516.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5516.patch",
"merged_at": "2023-02-12T15:57... | true |