| id (int64, 599M–3.48B) | number (int64, 1–7.8k) | title (string, 1–290 chars) | state (2 classes: open/closed) | comments (list, 0–30 items) | created_at (timestamp[s], 2020-04-14 to 2025-10-05) | updated_at (timestamp[s], 2020-04-27 to 2025-10-05) | closed_at (timestamp[s], nullable) | body (string, 0–228k chars, nullable) | user (string, 3–26 chars) | html_url (string, 46–51 chars) | pull_request (dict, nullable) | is_pull_request (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,524,250,269 | 5,412 | load_dataset() cannot find dataset_info.json with multiple training runs in parallel | closed | [
"Hi ! It fails because the dataset is already being prepared by your first run. I'd encourage you to prepare your dataset before using it for multiple trainings.\r\n\r\nYou can also specify another cache directory by passing `cache_dir=` to `load_dataset()`.",
"Thank you! What do you mean by prepare it beforehand... | 2023-01-08T00:44:32 | 2023-01-19T20:28:43 | 2023-01-19T20:28:43 | ### Describe the bug
I have a custom local dataset in JSON form. I am trying to do multiple training runs in parallel. The first training run runs with no issue. However, when I start another run on another GPU, the following code throws this error.
If there is a workaround to ignore the cache I think that would ... | mtoles | https://github.com/huggingface/datasets/issues/5412 | null | false |
1,523,297,786 | 5,411 | Update docs of S3 filesystem with async aiobotocore | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-01-06T23:19:17 | 2023-01-18T11:18:59 | 2023-01-18T11:12:04 | [s3fs has migrated to all async calls](https://github.com/fsspec/s3fs/commit/0de2c6fb3d87c08ea694de96dca0d0834034f8bf).
Updating documentation to use `AioSession` while using s3fs for download manager as well as working with datasets | maheshpec | https://github.com/huggingface/datasets/pull/5411 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5411",
"html_url": "https://github.com/huggingface/datasets/pull/5411",
"diff_url": "https://github.com/huggingface/datasets/pull/5411.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5411.patch",
"merged_at": "2023-01-18T11:12... | true |
1,521,168,032 | 5,410 | Map-style Dataset to IterableDataset | closed | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-01-05T18:12:17 | 2023-02-01T18:11:45 | 2023-02-01T16:36:01 | Added `ds.to_iterable()` to get an iterable dataset from a map-style arrow dataset.
It also has a `num_shards` argument to split the dataset before converting to an iterable dataset. Sharding is important to enable efficient shuffling and parallel loading of iterable datasets.
TODO:
- [x] tests
- [x] docs
Fi... | lhoestq | https://github.com/huggingface/datasets/pull/5410 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5410",
"html_url": "https://github.com/huggingface/datasets/pull/5410",
"diff_url": "https://github.com/huggingface/datasets/pull/5410.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5410.patch",
"merged_at": "2023-02-01T16:36... | true |
1,520,374,219 | 5,409 | Fix deprecation warning when use_auth_token passed to download_and_prepare | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-01-05T09:10:58 | 2023-01-06T11:06:16 | 2023-01-06T10:59:13 | The `DatasetBuilder.download_and_prepare` argument `use_auth_token` was deprecated in:
- #5302
However, `use_auth_token` is still passed to `download_and_prepare` in our built-in `io` readers (csv, json, parquet,...).
This PR fixes it, so that no deprecation warning is raised.
Fix #5407. | albertvillanova | https://github.com/huggingface/datasets/pull/5409 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5409",
"html_url": "https://github.com/huggingface/datasets/pull/5409",
"diff_url": "https://github.com/huggingface/datasets/pull/5409.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5409.patch",
"merged_at": "2023-01-06T10:59... | true |
1,519,890,752 | 5,408 | dataset map function could not be hash properly | closed | [
"Hi ! On macos I tried with\r\n- py 3.9.11\r\n- datasets 2.8.0\r\n- transformers 4.25.1\r\n- dill 0.3.4\r\n\r\nand I was able to hash `prepare_dataset` correctly:\r\n```python\r\nfrom datasets.fingerprint import Hasher\r\nHasher.hash(prepare_dataset)\r\n```\r\n\r\nWhat version of transformers do you have ? Can you ... | 2023-01-05T01:59:59 | 2023-01-06T13:22:19 | 2023-01-06T13:22:18 | ### Describe the bug
I follow the [blog post](https://huggingface.co/blog/fine-tune-whisper#building-a-demo) to finetune a Cantonese transcribe model.
When using map function to prepare dataset, following warning pop out:
`common_voice = common_voice.map(prepare_dataset,
remove_... | Tungway1990 | https://github.com/huggingface/datasets/issues/5408 | null | false |
1,519,797,345 | 5,407 | Datasets.from_sql() generates deprecation warning | closed | [
"Thanks for reporting @msummerfield. We are fixing it."
] | 2023-01-05T00:43:17 | 2023-01-06T10:59:14 | 2023-01-06T10:59:14 | ### Describe the bug
Calling `Datasets.from_sql()` generates a warning:
`.../site-packages/datasets/builder.py:712: FutureWarning: 'use_auth_token' was deprecated in version 2.7.1 and will be removed in 3.0.0. Pass 'use_auth_token' to the initializer/'load_dataset_builder' instead.`
### Steps to reproduce the ... | msummerfield | https://github.com/huggingface/datasets/issues/5407 | null | false |
1,519,140,544 | 5,406 | [2.6.1][2.7.0] Upgrade `datasets` to fix `TypeError: can only concatenate str (not "int") to str` | open | [
"I still get this error on 2.9.0\r\n<img width=\"1925\" alt=\"image\" src=\"https://user-images.githubusercontent.com/7208470/215597359-2f253c76-c472-4612-8099-d3a74d16eb29.png\">\r\n",
"Hi ! I just tested locally and or colab and it works fine for 2.9 on `sst2`.\r\n\r\nAlso the code that is shown in your stack t... | 2023-01-04T15:10:04 | 2023-06-21T18:45:38 | null | `datasets` 2.6.1 and 2.7.0 started to stop supporting datasets like IMDB, ConLL or MNIST datasets.
When loading a dataset using 2.6.1 or 2.7.0, you may this error when loading certain datasets:
```python
TypeError: can only concatenate str (not "int") to str
```
This is because we started to update the metadat... | lhoestq | https://github.com/huggingface/datasets/issues/5406 | null | false |
1,517,879,386 | 5,405 | size_in_bytes the same for all splits | open | [
"Hi @Breakend,\r\n\r\nIndeed, the attribute `size_in_bytes` refers to the size of the entire dataset configuration, for all splits (size of downloaded files + Arrow files), not the specific split.\r\nThis is also the case for `download_size` (downloaded files) and `dataset_size` (Arrow files).\r\n\r\nThe size of th... | 2023-01-03T20:25:48 | 2023-01-04T09:22:59 | null | ### Describe the bug
Hi, it looks like whenever you pull a dataset and get size_in_bytes, it returns the same size for all splits (and that size is the combined size of all splits). It seems like this shouldn't be the intended behavior since it is misleading. Here's an example:
```
>>> from datasets import load_da... | Breakend | https://github.com/huggingface/datasets/issues/5405 | null | false |
1,517,566,331 | 5,404 | Better integration of BIG-bench | open | [
"Hi, I made my version : https://huggingface.co/datasets/tasksource/bigbench"
] | 2023-01-03T15:37:57 | 2023-02-09T20:30:26 | null | ### Feature request
Ideally, it would be nice to have a maintained PyPI package for `bigbench`.
### Motivation
We'd like to allow anyone to access, explore and use any task.
### Your contribution
@lhoestq has opened an issue in their repo:
- https://github.com/google/BIG-bench/issues/906 | albertvillanova | https://github.com/huggingface/datasets/issues/5404 | null | false |
1,517,466,492 | 5,403 | Replace one letter import in docs | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for the docs fix for consistency.\r\n> \r\n> Again for consistency, it would be nice to make the same fix across all the docs, e.g.\r\n> \r\n> https://github.com/huggingface/datasets/blob/310cdddd1c43f9658de172b85b6509d07d5e... | 2023-01-03T14:26:32 | 2023-01-03T15:06:18 | 2023-01-03T14:59:01 | This PR updates a code example for consistency across the docs based on [feedback from this comment](https://github.com/huggingface/transformers/pull/20925/files/9fda31634d203a47d3212e4e8d43d3267faf9808#r1058769500):
"In terms of style we usually stay away from one-letter imports like this (even if the community use... | MKhalusova | https://github.com/huggingface/datasets/pull/5403 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5403",
"html_url": "https://github.com/huggingface/datasets/pull/5403",
"diff_url": "https://github.com/huggingface/datasets/pull/5403.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5403.patch",
"merged_at": "2023-01-03T14:59... | true |
1,517,409,429 | 5,402 | Missing state.json when creating a cloud dataset using a dataset_builder | open | [
"`load_from_disk` must be used on datasets saved using `save_to_disk`: they correspond to fully serialized datasets including their state.\r\n\r\nOn the other hand, `download_and_prepare` just downloads the raw data and convert them to arrow (or parquet if you want). We are working on allowing you to reload a datas... | 2023-01-03T13:39:59 | 2023-01-04T17:23:57 | null | ### Describe the bug
Using `load_dataset_builder` to create a builder, run `download_and_prepare` do upload it to S3. However when trying to load it, there are missing `state.json` files. Complete example:
```python
from aiobotocore.session import AioSession as Session
from datasets import load_from_disk, load_da... | danielfleischer | https://github.com/huggingface/datasets/issues/5402 | null | false |
1,517,160,935 | 5,401 | Support Dataset conversion from/to Spark | open | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5401). All of your documentation changes will be reflected on that endpoint.",
"Cool thanks !\r\n\r\nSpark DataFrame are usually quite big, and I believe here `from_spark` would load everything in the driver node's RAM, which i... | 2023-01-03T09:57:40 | 2023-01-05T14:21:33 | null | This PR implements Spark integration by supporting `Dataset` conversion from/to Spark `DataFrame`. | albertvillanova | https://github.com/huggingface/datasets/pull/5401 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5401",
"html_url": "https://github.com/huggingface/datasets/pull/5401",
"diff_url": "https://github.com/huggingface/datasets/pull/5401.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5401.patch",
"merged_at": null
} | true |
1,517,032,972 | 5,400 | Support streaming datasets with os.path.exists and Path.exists | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-01-03T07:42:37 | 2023-01-06T10:42:44 | 2023-01-06T10:35:44 | Support streaming datasets with `os.path.exists` and `pathlib.Path.exists`. | albertvillanova | https://github.com/huggingface/datasets/pull/5400 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5400",
"html_url": "https://github.com/huggingface/datasets/pull/5400",
"diff_url": "https://github.com/huggingface/datasets/pull/5400.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5400.patch",
"merged_at": "2023-01-06T10:35... | true |
1,515,548,427 | 5,399 | Got disconnected from remote data host. Retrying in 5sec [2/20] | closed | [] | 2023-01-01T13:00:11 | 2023-01-02T07:21:52 | 2023-01-02T07:21:52 | ### Describe the bug
While trying to upload my image dataset of a CSV file type to huggingface by running the below code. The dataset consists of a little over 100k of image-caption pairs
### Steps to reproduce the bug
```
df = pd.read_csv('x.csv', encoding='utf-8-sig')
features = Features({
'link': Ima... | alhuri | https://github.com/huggingface/datasets/issues/5399 | null | false |
1,514,425,231 | 5,398 | Unpin pydantic | closed | [] | 2022-12-30T10:37:31 | 2022-12-30T10:43:41 | 2022-12-30T10:43:41 | Once `pydantic` fixes their issue in their 1.10.3 version, unpin it.
See issue:
- #5394
See temporary fix:
- #5395 | albertvillanova | https://github.com/huggingface/datasets/issues/5398 | null | false |
1,514,412,246 | 5,397 | Unpin pydantic test dependency | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2022-12-30T10:22:09 | 2022-12-30T10:53:11 | 2022-12-30T10:43:40 | Once pydantic-1.10.3 has been yanked, we can unpin it: https://pypi.org/project/pydantic/1.10.3/
See reply by pydantic team https://github.com/pydantic/pydantic/issues/4885#issuecomment-1367819807
```
v1.10.3 has been yanked.
```
in response to spacy request: https://github.com/pydantic/pydantic/issues/4885#issu... | albertvillanova | https://github.com/huggingface/datasets/pull/5397 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5397",
"html_url": "https://github.com/huggingface/datasets/pull/5397",
"diff_url": "https://github.com/huggingface/datasets/pull/5397.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5397.patch",
"merged_at": "2022-12-30T10:43... | true |
1,514,002,934 | 5,396 | Fix checksum verification | closed | [
"Hi ! If I'm not mistaken both `expected_checksums[url]` and `recorded_checksums[url]` are dictionaries with keys \"checksum\" and \"num_bytes\". So we need to check whether `expected_checksums[url] != recorded_checksums[url]` (or simply `expected_checksums[url][\"checksum\"] != recorded_checksums[url][\"checksum\"... | 2022-12-29T19:45:17 | 2023-02-13T11:11:22 | 2023-02-13T11:11:22 | Expected checksum was verified against checksum dict (not checksum). | daskol | https://github.com/huggingface/datasets/pull/5396 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5396",
"html_url": "https://github.com/huggingface/datasets/pull/5396",
"diff_url": "https://github.com/huggingface/datasets/pull/5396.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5396.patch",
"merged_at": null
} | true |
1,513,997,335 | 5,395 | Temporarily pin pydantic test dependency | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2022-12-29T19:34:19 | 2022-12-30T06:36:57 | 2022-12-29T21:00:26 | Temporarily pin `pydantic` until a permanent solution is found.
Fix #5394. | albertvillanova | https://github.com/huggingface/datasets/pull/5395 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5395",
"html_url": "https://github.com/huggingface/datasets/pull/5395",
"diff_url": "https://github.com/huggingface/datasets/pull/5395.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5395.patch",
"merged_at": "2022-12-29T21:00... | true |
1,513,976,229 | 5,394 | CI error: TypeError: dataclass_transform() got an unexpected keyword argument 'field_specifiers' | closed | [
"I still getting the same error :\r\n\r\n`python -m spacy download fr_core_news_lg\r\n`.\r\n`import spacy`",
"@MFatnassi, this issue and the corresponding fix only affect our Continuous Integration testing environment.\r\n\r\nNote that `datasets` does not depend on `spacy`."
] | 2022-12-29T18:58:44 | 2022-12-30T10:40:51 | 2022-12-29T21:00:27 | ### Describe the bug
While installing the dependencies, the CI raises a TypeError:
```
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/runpy.py", line 183, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/opt/hoste... | albertvillanova | https://github.com/huggingface/datasets/issues/5394 | null | false |
1,512,908,613 | 5,393 | Finish deprecating the fs argument | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for the deprecation. Some minor suggested fixes below...\r\n> \r\n> Also note that the corresponding tests should be updated as well.\r\n\r\nThanks for the suggestions/typo fixes. I updated the failing test - passing locall... | 2022-12-28T15:33:17 | 2023-01-18T12:42:33 | 2023-01-18T12:35:32 | See #5385 for some discussion on this
The `fs=` arg was depcrecated from `Dataset.save_to_disk` and `Dataset.load_from_disk` in `2.8.0` (to be removed in `3.0.0`). There are a few other places where the `fs=` arg was still used (functions/methods in `datasets.info` and `datasets.load`). This PR adds a similar beha... | dconathan | https://github.com/huggingface/datasets/pull/5393 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5393",
"html_url": "https://github.com/huggingface/datasets/pull/5393",
"diff_url": "https://github.com/huggingface/datasets/pull/5393.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5393.patch",
"merged_at": "2023-01-18T12:35... | true |
1,512,712,529 | 5,392 | Fix Colab notebook link | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2022-12-28T11:44:53 | 2023-01-03T15:36:14 | 2023-01-03T15:27:31 | Fix notebook link to open in Colab. | albertvillanova | https://github.com/huggingface/datasets/pull/5392 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5392",
"html_url": "https://github.com/huggingface/datasets/pull/5392",
"diff_url": "https://github.com/huggingface/datasets/pull/5392.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5392.patch",
"merged_at": "2023-01-03T15:27... | true |
1,510,350,400 | 5,391 | Whisper Event - RuntimeError: The size of tensor a (504) must match the size of tensor b (448) at non-singleton dimension 1 100% 1000/1000 [2:52:21<00:00, 10.34s/it] | closed | [
"Hey @catswithbats! Super sorry for the late reply! This is happening because there is data with label length (504) that exceeds the model's max length (448). \r\n\r\nThere are two options here:\r\n1. Increase the model's `max_length` parameter: \r\n```python\r\nmodel.config.max_length = 512\r\n```\r\n2. Filter dat... | 2022-12-25T15:17:14 | 2023-07-21T14:29:47 | 2023-07-21T14:29:47 | Done in a VM with a GPU (Ubuntu) following the [Whisper Event - PYTHON](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event#python-script) instructions.
Attempted using [RuntimeError: he size of tensor a (504) must match the size of tensor b (448) at non-singleton dimension 1 100% 1... | catswithbats | https://github.com/huggingface/datasets/issues/5391 | null | false |
1,509,357,553 | 5,390 | Error when pushing to the CI hub | closed | [
"Hmmm, git bisect tells me that the behavior is the same since https://github.com/huggingface/datasets/commit/67e65c90e9490810b89ee140da11fdd13c356c9c (3 Oct), i.e. https://github.com/huggingface/datasets/pull/4926",
"Maybe related to the discussions in https://github.com/huggingface/datasets/pull/5196",
"Maybe... | 2022-12-23T13:36:37 | 2022-12-23T20:29:02 | 2022-12-23T20:29:02 | ### Describe the bug
Note that it's a special case where the Hub URL is "https://hub-ci.huggingface.co", which does not appear if we do the same on the Hub (https://huggingface.co).
The call to `dataset.push_to_hub(` fails:
```
Pushing dataset shards to the dataset hub: 100%|██████████████████████████████████... | severo | https://github.com/huggingface/datasets/issues/5390 | null | false |
1,509,348,626 | 5,389 | Fix link in `load_dataset` docstring | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2022-12-23T13:26:31 | 2023-01-25T19:00:43 | 2023-01-24T16:33:38 | Fix https://github.com/huggingface/datasets/issues/5387, fix https://github.com/huggingface/datasets/issues/4566 | mariosasko | https://github.com/huggingface/datasets/pull/5389 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5389",
"html_url": "https://github.com/huggingface/datasets/pull/5389",
"diff_url": "https://github.com/huggingface/datasets/pull/5389.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5389.patch",
"merged_at": "2023-01-24T16:33... | true |
1,509,042,348 | 5,388 | Getting Value Error while loading a dataset.. | closed | [
"Hi! I can't reproduce this error locally (Mac) or in Colab. What version of `datasets` are you using?",
"Hi [mariosasko](https://github.com/mariosasko), the datasets version is '2.8.0'.",
"@valmetisrinivas you get that error because you imported `datasets` (and thus `fsspec`) before installing `zstandard`.\r\n... | 2022-12-23T08:16:43 | 2022-12-29T08:36:33 | 2022-12-27T17:59:09 | ### Describe the bug
I am trying to load a dataset using Hugging Face Datasets load_dataset method. I am getting the value error as show below. Can someone help with this? I am using Windows laptop and Google Colab notebook.
```
WARNING:datasets.builder:Using custom data configuration default-a1d9e8eaedd958cd
---... | valmetisrinivas | https://github.com/huggingface/datasets/issues/5388 | null | false |
1,508,740,177 | 5,387 | Missing documentation page : improve-performance | closed | [
"Hi! Our documentation builder does not support links to sections, hence the bug. This is the link it should point to https://huggingface.co/docs/datasets/v2.8.0/en/cache#improve-performance."
] | 2022-12-23T01:12:57 | 2023-01-24T16:33:40 | 2023-01-24T16:33:40 | ### Describe the bug
Trying to access https://huggingface.co/docs/datasets/v2.8.0/en/package_reference/cache#improve-performance, the page is missing.
The link is in here : https://huggingface.co/docs/datasets/v2.8.0/en/package_reference/loading_methods#datasets.load_dataset.keep_in_memory
### Steps to reproduce t... | astariul | https://github.com/huggingface/datasets/issues/5387 | null | false |
1,508,592,918 | 5,386 | `max_shard_size` in `datasets.push_to_hub()` breaks with large files | closed | [
"Hi! \r\n\r\nThis behavior stems from the fact that we don't always embed image bytes in the underlying arrow table, which can lead to bad size estimation (we use the first 1000 table rows to [estimate](https://github.com/huggingface/datasets/blob/9a7272cd4222383a5b932b0083a4cc173fda44e8/src/datasets/arrow_dataset.... | 2022-12-22T21:50:58 | 2022-12-26T23:45:51 | 2022-12-26T23:45:51 | ### Describe the bug
`max_shard_size` parameter for `datasets.push_to_hub()` works unreliably with large files, generating shard files that are way past the specified limit.
In my private dataset, which contains unprocessed images of all sizes (up to `~100MB` per file), I've encountered cases where `max_shard_siz... | salieri | https://github.com/huggingface/datasets/issues/5386 | null | false |
1,508,535,532 | 5,385 | Is `fs=` deprecated in `load_from_disk()` as well? | closed | [
"Hi! Yes, we should deprecate the `fs` param here. Would you be interested in submitting a PR? ",
"> Hi! Yes, we should deprecate the `fs` param here. Would you be interested in submitting a PR?\r\n\r\nYeah I can do that sometime next week. Should the storage_options be a new arg here? I’ll look around for anywh... | 2022-12-22T21:00:45 | 2023-01-23T10:50:05 | 2023-01-23T10:50:04 | ### Describe the bug
The `fs=` argument was deprecated from `Dataset.save_to_disk` and `Dataset.load_from_disk` in favor of automagically figuring it out via fsspec:
https://github.com/huggingface/datasets/blob/9a7272cd4222383a5b932b0083a4cc173fda44e8/src/datasets/arrow_dataset.py#L1339-L1340
Is there a reason the... | dconathan | https://github.com/huggingface/datasets/issues/5385 | null | false |
1,508,152,598 | 5,384 | Handle 0-dim tensors in `cast_to_python_objects` | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2022-12-22T16:15:30 | 2023-01-13T16:10:15 | 2023-01-13T16:00:52 | Fix #5229 | mariosasko | https://github.com/huggingface/datasets/pull/5384 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5384",
"html_url": "https://github.com/huggingface/datasets/pull/5384",
"diff_url": "https://github.com/huggingface/datasets/pull/5384.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5384.patch",
"merged_at": "2023-01-13T16:00... | true |
1,507,293,968 | 5,383 | IterableDataset missing column_names, differs from Dataset interface | closed | [
"Another example is that `IterableDataset.map` does not have `fn_kwargs`, among other arguments. It makes it harder to convert code from Dataset to IterableDataset.",
"Hi! `fn_kwargs` was added to `IterableDataset.map` in `datasets 2.5.0`, so please update your installation (`pip install -U datasets`) to use it.\... | 2022-12-22T05:27:02 | 2023-03-13T19:03:33 | 2023-03-13T19:03:33 | ### Describe the bug
The documentation on [Stream](https://huggingface.co/docs/datasets/v1.18.2/stream.html) seems to imply that IterableDataset behaves just like a Dataset. However, examples like
```
dataset.map(augment_data, batched=True, remove_columns=dataset.column_names, ...)
```
will not work because `.colu... | iceboundflame | https://github.com/huggingface/datasets/issues/5383 | null | false |
1,504,788,691 | 5,382 | Raise from disconnect error in xopen | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Could you review this small PR @albertvillanova ? :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric... | 2022-12-20T15:52:44 | 2023-01-26T09:51:13 | 2023-01-26T09:42:45 | this way we can know the cause of the disconnect
related to https://github.com/huggingface/datasets/issues/5374 | lhoestq | https://github.com/huggingface/datasets/pull/5382 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5382",
"html_url": "https://github.com/huggingface/datasets/pull/5382",
"diff_url": "https://github.com/huggingface/datasets/pull/5382.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5382.patch",
"merged_at": "2023-01-26T09:42... | true |
1,504,498,387 | 5,381 | Wrong URL for the_pile dataset | closed | [
"Hi! This error can happen if there is a local file/folder with the same name as the requested dataset. And to avoid it, rename the local file/folder.\r\n\r\nSoon, it will be possible to explicitly request a Hub dataset as follows:https://github.com/huggingface/datasets/issues/5228#issuecomment-1313494020"
] | 2022-12-20T12:40:14 | 2023-02-15T16:24:57 | 2023-02-15T16:24:57 | ### Describe the bug
When trying to load `the_pile` dataset from the library, I get a `FileNotFound` error.
### Steps to reproduce the bug
Steps to reproduce:
Run:
```
from datasets import load_dataset
dataset = load_dataset("the_pile")
```
I get the output:
"name": "FileNotFoundError",
"message... | LeoGrin | https://github.com/huggingface/datasets/issues/5381 | null | false |
1,504,404,043 | 5,380 | Improve dataset `.skip()` speed in streaming mode | open | [
"Hi! I agree `skip` can be inefficient to use in the current state.\r\n\r\nTo make it fast, we could use \"statistics\" stored in Parquet metadata and read only the chunks needed to form a dataset. \r\n\r\nAnd thanks to the \"datasets-server\" project, which aims to store the Parquet versions of the Hub datasets (o... | 2022-12-20T11:25:23 | 2023-03-08T10:47:12 | null | ### Feature request
Add extra information to the `dataset_infos.json` file to include the number of samples/examples in each shard, for example in a new field `num_examples` alongside `num_bytes`. The `.skip()` function could use this information to ignore the download of a shard when in streaming mode, which AFAICT... | versae | https://github.com/huggingface/datasets/issues/5380 | null | false |
1,504,010,639 | 5,379 | feat: depth estimation dataset guide. | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the changes, looks good to me!",
"@stevhliu I have pushed some quality improvements both in terms of code and content. Would you be able to re-review? ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0... | 2022-12-20T05:32:11 | 2023-01-13T12:30:31 | 2023-01-13T12:23:34 | This PR adds a guide for prepping datasets for depth estimation.
PR to add documentation images is up here: https://huggingface.co/datasets/huggingface/documentation-images/discussions/22 | sayakpaul | https://github.com/huggingface/datasets/pull/5379 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5379",
"html_url": "https://github.com/huggingface/datasets/pull/5379",
"diff_url": "https://github.com/huggingface/datasets/pull/5379.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5379.patch",
"merged_at": "2023-01-13T12:23... | true |
1,503,887,508 | 5,378 | The dataset "the_pile", subset "enron_emails" , load_dataset() failure | closed | [
"Thanks for reporting @shaoyuta. We are investigating it.\r\n\r\nWe are transferring the issue to \"the_pile\" Community tab on the Hub: https://huggingface.co/datasets/the_pile/discussions/4"
] | 2022-12-20T02:19:13 | 2022-12-20T07:52:54 | 2022-12-20T07:52:54 | ### Describe the bug
When run
"datasets.load_dataset("the_pile","enron_emails")" failure

### Steps to reproduce the bug
Run below code in python cli:
>>> import datasets
>>> datasets.load_dataset(... | shaoyuta | https://github.com/huggingface/datasets/issues/5378 | null | false |
1,503,477,833 | 5,377 | Add a parallel implementation of to_tf_dataset() | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Failing because the test server uses Py3.7 but the `SharedMemory` features require Py3.8! I forgot we still support 3.7 for another couple of months. I'm not sure exactly how to proceed, whether I should leave this PR until then, or ... | 2022-12-19T19:40:27 | 2023-01-25T16:28:44 | 2023-01-25T16:21:40 | Hey all! Here's a first draft of the PR to add a multiprocessing implementation for `to_tf_dataset()`. It worked in some quick testing for me, but obviously I need to do some much more rigorous testing/benchmarking, and add some proper library tests.
The core idea is that we do everything using `multiprocessing` and... | Rocketknight1 | https://github.com/huggingface/datasets/pull/5377 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5377",
"html_url": "https://github.com/huggingface/datasets/pull/5377",
"diff_url": "https://github.com/huggingface/datasets/pull/5377.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5377.patch",
"merged_at": "2023-01-25T16:21... | true |
1,502,730,559 | 5,376 | set dev version | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5376). All of your documentation changes will be reflected on that endpoint."
] | 2022-12-19T10:56:56 | 2022-12-19T11:01:55 | 2022-12-19T10:57:16 | null | lhoestq | https://github.com/huggingface/datasets/pull/5376 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5376",
"html_url": "https://github.com/huggingface/datasets/pull/5376",
"diff_url": "https://github.com/huggingface/datasets/pull/5376.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5376.patch",
"merged_at": "2022-12-19T10:57... | true |
1,502,720,404 | 5,375 | Release: 2.8.0 | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-19T10:48:26 | 2022-12-19T10:55:43 | 2022-12-19T10:53:15 | null | lhoestq | https://github.com/huggingface/datasets/pull/5375 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5375",
"html_url": "https://github.com/huggingface/datasets/pull/5375",
"diff_url": "https://github.com/huggingface/datasets/pull/5375.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5375.patch",
"merged_at": "2022-12-19T10:53... | true |
1,501,872,945 | 5,374 | Using too many threads results in: Got disconnected from remote data host. Retrying in 5sec | closed | [
"The data files are hosted on HF at https://huggingface.co/datasets/allenai/c4/tree/main\r\n\r\nYou have 200 runs streaming the same files in parallel. So this is probably a Hub limitation. Maybe rate limiting ? cc @julien-c \r\n\r\nMaybe you can also try to reduce the number of HTTP requests by increasing the bloc... | 2022-12-18T11:38:58 | 2023-07-24T15:23:07 | 2023-07-24T15:23:07 | ### Describe the bug
`streaming_download_manager` seems to disconnect if too many runs access the same underlying dataset 🧐
The code works fine for me if I have ~100 runs in parallel, but disconnects once scaling to 200.
Possibly related:
- https://github.com/huggingface/datasets/pull/3100
- https://github.com/... | Muennighoff | https://github.com/huggingface/datasets/issues/5374 | null | false |
1,501,484,197 | 5,373 | Simplify skipping | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-17T17:23:52 | 2022-12-18T21:43:31 | 2022-12-18T21:40:21 | Was hoping to find a way to speed up the skipping as I'm running into bottlenecks skipping 100M examples on C4 (it takes 12 hours to skip), but didn't find anything better than this small change :(
Maybe there's a way to directly skip whole shards to speed it up? 🧐 | Muennighoff | https://github.com/huggingface/datasets/pull/5373 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5373",
"html_url": "https://github.com/huggingface/datasets/pull/5373",
"diff_url": "https://github.com/huggingface/datasets/pull/5373.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5373.patch",
"merged_at": "2022-12-18T21:40... | true |
1,501,377,802 | 5,372 | Fix streaming pandas.read_excel | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2022-12-17T12:58:52 | 2023-01-06T11:50:58 | 2023-01-06T11:43:37 | This PR fixes `xpandas_read_excel`:
- Support passing a path string, besides a file-like object
- Support passing `use_auth_token`
- First assumes the host server supports HTTP range requests; only if a ValueError is thrown (Cannot seek streaming HTTP file), then it preserves previous behavior (see [#3355](https://g... | albertvillanova | https://github.com/huggingface/datasets/pull/5372 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5372",
"html_url": "https://github.com/huggingface/datasets/pull/5372",
"diff_url": "https://github.com/huggingface/datasets/pull/5372.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5372.patch",
"merged_at": "2023-01-06T11:43... | true |
1,501,369,036 | 5,371 | Add a robustness benchmark dataset for vision | open | [
"Ccing @nazneenrajani @lvwerra @osanseviero "
] | 2022-12-17T12:35:13 | 2022-12-20T06:21:41 | null | ### Name
ImageNet-C
### Paper
Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
### Data
https://github.com/hendrycks/robustness
### Motivation
It's a known fact that vision models are brittle when they meet with slightly corrupted and perturbed data. This is also corre... | sayakpaul | https://github.com/huggingface/datasets/issues/5371 | null | false |
1,500,622,276 | 5,369 | Distributed support | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Alright all the tests are passing - this is ready for review",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n... | 2022-12-16T17:43:47 | 2023-07-25T12:00:31 | 2023-01-16T13:33:32 | To split your dataset across your training nodes, you can use the new [`datasets.distributed.split_dataset_by_node`]:
```python
import os
from datasets.distributed import split_dataset_by_node
ds = split_dataset_by_node(ds, rank=int(os.environ["RANK"]), world_size=int(os.environ["WORLD_SIZE"]))
```
This wor... | lhoestq | https://github.com/huggingface/datasets/pull/5369 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5369",
"html_url": "https://github.com/huggingface/datasets/pull/5369",
"diff_url": "https://github.com/huggingface/datasets/pull/5369.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5369.patch",
"merged_at": "2023-01-16T13:33... | true |
1,500,322,973 | 5,368 | Align remove columns behavior and input dict mutation in `map` with previous behavior | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-16T14:28:47 | 2022-12-16T16:28:08 | 2022-12-16T16:25:12 | Align the `remove_columns` behavior and input dict mutation in `map` with the behavior before https://github.com/huggingface/datasets/pull/5252. | mariosasko | https://github.com/huggingface/datasets/pull/5368 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5368",
"html_url": "https://github.com/huggingface/datasets/pull/5368",
"diff_url": "https://github.com/huggingface/datasets/pull/5368.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5368.patch",
"merged_at": "2022-12-16T16:25... | true |
1,499,174,749 | 5,367 | Fix remove columns from lazy dict | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-15T22:04:12 | 2022-12-15T22:27:53 | 2022-12-15T22:24:50 | This was introduced in https://github.com/huggingface/datasets/pull/5252 and causing the transformers CI to break: https://app.circleci.com/pipelines/github/huggingface/transformers/53886/workflows/522faf2e-a053-454c-94f8-a617fde33393/jobs/648597
Basically this code should return a dataset with only one column:
`... | lhoestq | https://github.com/huggingface/datasets/pull/5367 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5367",
"html_url": "https://github.com/huggingface/datasets/pull/5367",
"diff_url": "https://github.com/huggingface/datasets/pull/5367.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5367.patch",
"merged_at": "2022-12-15T22:24... | true |
1,498,530,851 | 5,366 | ExamplesIterable fixes | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-15T14:23:05 | 2022-12-15T14:44:47 | 2022-12-15T14:41:45 | fix typing and ExamplesIterable.shard_data_sources | lhoestq | https://github.com/huggingface/datasets/pull/5366 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5366",
"html_url": "https://github.com/huggingface/datasets/pull/5366",
"diff_url": "https://github.com/huggingface/datasets/pull/5366.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5366.patch",
"merged_at": "2022-12-15T14:41... | true |
1,498,422,466 | 5,365 | fix: image array should support other formats than uint8 | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi, thanks for working on this! \r\n\r\nI agree that the current type-casting (always cast to `np.uint8` as Tensorflow Datasets does) is a bit too harsh. However, not all dtypes are supported in `Image.fromarray` (e.g. np.int64), so ... | 2022-12-15T13:17:50 | 2023-01-26T18:46:45 | 2023-01-26T18:39:36 | Currently images that are provided as ndarrays, but not in `uint8` format are going to loose data. Namely, for example in a depth image where the data is in float32 format, the type-casting to uint8 will basically make the whole image blank.
`PIL.Image.fromarray` [does support mode `F`](https://pillow.readthedocs.io/e... | vigsterkr | https://github.com/huggingface/datasets/pull/5365 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5365",
"html_url": "https://github.com/huggingface/datasets/pull/5365",
"diff_url": "https://github.com/huggingface/datasets/pull/5365.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5365.patch",
"merged_at": "2023-01-26T18:39... | true |
1,498,360,628 | 5,364 | Support for writing arrow files directly with BeamWriter | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5364). All of your documentation changes will be reflected on that endpoint.",
"Deleting `BeamPipeline` and `upload_local_to_remote` would break the existing Beam scripts, so I reverted this change.\r\n\r\nFrom what I understan... | 2022-12-15T12:38:05 | 2024-01-11T14:52:33 | 2024-01-11T14:45:15 | Make it possible to write Arrow files directly with `BeamWriter` rather than converting from Parquet to Arrow, which is sub-optimal, especially for big datasets for which Beam is primarily used. | mariosasko | https://github.com/huggingface/datasets/pull/5364 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5364",
"html_url": "https://github.com/huggingface/datasets/pull/5364",
"diff_url": "https://github.com/huggingface/datasets/pull/5364.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5364.patch",
"merged_at": null
} | true |
1,498,171,317 | 5,363 | Dataset.from_generator() crashes on simple example | closed | [] | 2022-12-15T10:21:28 | 2022-12-15T11:51:33 | 2022-12-15T11:51:33 | null | villmow | https://github.com/huggingface/datasets/issues/5363 | null | false |
1,497,643,744 | 5,362 | Run 'GPT-J' failure due to download dataset fail (' ConnectionError: Couldn't reach http://eaidata.bmk.sh/data/enron_emails.jsonl.zst ' ) | closed | [
"Thanks for reporting, @shaoyuta.\r\n\r\nWe have checked and yes, apparently there is an issue with the server hosting the data of the \"enron_emails\" subset of \"the_pile\" dataset: http://eaidata.bmk.sh/data/enron_emails.jsonl.zst\r\nIt seems to be down: The connection has timed out.\r\n\r\nPlease note that at t... | 2022-12-15T01:23:03 | 2022-12-15T07:45:54 | 2022-12-15T07:45:53 | ### Describe the bug
Run model "GPT-J" with dataset "the_pile" fail.
The fail out is as below:

Looks like which is due to "http://eaidata.bmk.sh/data/enron_emails.jsonl.zst" unreachable .
### Steps to ... | shaoyuta | https://github.com/huggingface/datasets/issues/5362 | null | false |
1,497,153,889 | 5,361 | How concatenate `Audio` elements using batch mapping | closed | [
"You can try something like this ?\r\n```python\r\ndef mapper_function(batch):\r\n return {\"concatenated_audio\": [np.concatenate([audio[\"array\"] for audio in batch[\"audio\"]])]}\r\n\r\ndataset = dataset.map(\r\n mapper_function,\r\n batched=True,\r\n batch_size=3,\r\n remove_columns=list(dataset.... | 2022-12-14T18:13:55 | 2023-07-21T14:30:51 | 2023-07-21T14:30:51 | ### Describe the bug
I am trying to do concatenate audios in a dataset e.g. `google/fleurs`.
```python
print(dataset)
# Dataset({
# features: ['path', 'audio'],
# num_rows: 24
# })
def mapper_function(batch):
# to merge every 3 audio
# np.concatnate(audios[i: i+3]) for i in range(i, len(batc... | bayartsogt-ya | https://github.com/huggingface/datasets/issues/5361 | null | false |
1,496,947,177 | 5,360 | IterableDataset returns duplicated data using PyTorch DDP | closed | [
"If you use huggingface trainer, you will find the trainer has wrapped a `IterableDatasetShard` to avoid duplication.\r\nSee:\r\nhttps://github.com/huggingface/transformers/blob/dfd818420dcbad68e05a502495cf666d338b2bfb/src/transformers/trainer.py#L835\r\n",
"If you want to support it by datasets natively, maybe w... | 2022-12-14T16:06:19 | 2023-06-15T09:51:13 | 2023-01-16T13:33:33 | As mentioned in https://github.com/huggingface/datasets/issues/3423, when using PyTorch DDP the dataset ends up with duplicated data. We already check for the PyTorch `worker_info` for single node, but we should also check for `torch.distributed.get_world_size()` and `torch.distributed.get_rank()` | lhoestq | https://github.com/huggingface/datasets/issues/5360 | null | false |
1,495,297,857 | 5,359 | Raise error if ClassLabel names is not python list | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for your proposed fix, @freddyheppell.\r\n\r\nCurrently the CI fails because in a test we pass a `tuple` instead of a `list`. I would say we should accept `tuple` as a valid input type as well...\r\n\r\nWhat about checking for... | 2022-12-13T23:04:06 | 2022-12-22T16:35:49 | 2022-12-22T16:32:49 | Checks type of names provided to ClassLabel to avoid easy and hard to debug errors (closes #5332 - see for discussion) | freddyheppell | https://github.com/huggingface/datasets/pull/5359 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5359",
"html_url": "https://github.com/huggingface/datasets/pull/5359",
"diff_url": "https://github.com/huggingface/datasets/pull/5359.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5359.patch",
"merged_at": "2022-12-22T16:32... | true |
1,495,270,822 | 5,358 | Fix `fs.open` resource leaks | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@mariosasko Sorry, I didn't check tests/style after doing a merge from the Git UI last week. Thx for fixing. \r\n\r\nFYI I'm getting \"Only those with [write access](https://docs.github.com/articles/what-are-the-different-access-perm... | 2022-12-13T22:35:51 | 2023-01-05T16:46:31 | 2023-01-05T15:59:51 | Invoking `{load,save}_from_dict` results in resource leak warnings, this should fix.
Introduces no significant logic changes. | tkukurin | https://github.com/huggingface/datasets/pull/5358 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5358",
"html_url": "https://github.com/huggingface/datasets/pull/5358",
"diff_url": "https://github.com/huggingface/datasets/pull/5358.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5358.patch",
"merged_at": "2023-01-05T15:59... | true |
1,495,029,602 | 5,357 | Support torch dataloader without torch formatting | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Need some more time to fix the tests, especially with pickle",
"> And I actually don't quite understand the idea - what's the motivation behind making only IterableDataset compatible with torch DataLoader without setting the format... | 2022-12-13T19:39:24 | 2023-01-04T12:45:40 | 2022-12-15T19:15:54 | In https://github.com/huggingface/datasets/pull/5084 we make the torch formatting consistent with the map-style datasets formatting: a torch formatted iterable dataset will yield torch tensors.
The previous behavior of the torch formatting for iterable dataset was simply to make the iterable dataset inherit from `to... | lhoestq | https://github.com/huggingface/datasets/pull/5357 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5357",
"html_url": "https://github.com/huggingface/datasets/pull/5357",
"diff_url": "https://github.com/huggingface/datasets/pull/5357.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5357.patch",
"merged_at": "2022-12-15T19:15... | true |
1,494,961,609 | 5,356 | Clean filesystem and logging docstrings | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-13T18:54:09 | 2022-12-14T17:25:58 | 2022-12-14T17:22:16 | This PR cleans the `Filesystems` and `Logging` docstrings. | stevhliu | https://github.com/huggingface/datasets/pull/5356 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5356",
"html_url": "https://github.com/huggingface/datasets/pull/5356",
"diff_url": "https://github.com/huggingface/datasets/pull/5356.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5356.patch",
"merged_at": "2022-12-14T17:22... | true |
1,493,076,860 | 5,355 | Clean up Table class docstrings | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-13T00:29:47 | 2022-12-13T18:17:56 | 2022-12-13T18:14:42 | This PR cleans up the `Table` class docstrings :) | stevhliu | https://github.com/huggingface/datasets/pull/5355 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5355",
"html_url": "https://github.com/huggingface/datasets/pull/5355",
"diff_url": "https://github.com/huggingface/datasets/pull/5355.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5355.patch",
"merged_at": "2022-12-13T18:14... | true |
1,492,174,125 | 5,354 | Consider using "Sequence" instead of "List" | open | [
"Hi! Linking a comment to provide more info on the issue: https://stackoverflow.com/a/39458225. This means we should replace all (most of) the occurrences of `List` with `Sequence` in function signatures.\r\n\r\n@tranhd95 Would you be interested in submitting a PR?",
"Hi all! I tried to reproduce this issue and d... | 2022-12-12T15:39:45 | 2025-09-24T03:05:51 | null | ### Feature request
Hi, please consider using `Sequence` type annotation instead of `List` in function arguments such as in [`Dataset.from_parquet()`](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L1088). It leads to type checking errors, see below.
**How to reproduce**
```py
... | tranhd95 | https://github.com/huggingface/datasets/issues/5354 | null | false |
1,491,880,500 | 5,353 | Support remote file systems for `Audio` | closed | [
"Just seen https://github.com/huggingface/datasets/issues/5281"
] | 2022-12-12T13:22:13 | 2022-12-12T13:37:14 | 2022-12-12T13:37:14 | ### Feature request
Hi there!
It would be super cool if `Audio()`, and potentially other features, could read files from a remote file system.
### Motivation
Large amounts of data is often stored in buckets. `load_from_disk` is able to retrieve data from cloud storage but to my knowledge actually copies the datas... | OllieBroadhurst | https://github.com/huggingface/datasets/issues/5353 | null | false |
1,490,796,414 | 5,352 | __init__() got an unexpected keyword argument 'input_size' | open | [
"Hi @J-shel, thanks for reporting.\r\n\r\nI think the issue comes from your call to `load_dataset`. As first argument, you should pass:\r\n- either the name of your dataset (\"mrf\") if this is already published on the Hub\r\n- or the path to the loading script of your dataset (\"path/to/your/local/mrf.py\").",
"... | 2022-12-12T02:52:03 | 2022-12-19T01:38:48 | null | ### Describe the bug
I try to define a custom configuration with a input_size attribute following the instructions by "Specifying several dataset configurations" in https://huggingface.co/docs/datasets/v1.2.1/add_dataset.html
But when I load the dataset, I got an error "__init__() got an unexpected keyword argument... | J-shel | https://github.com/huggingface/datasets/issues/5352 | null | false |
1,490,659,504 | 5,351 | Do we need to implement `_prepare_split`? | closed | [
"Hi! `DatasetBuilder` is a parent class for concrete builders: `GeneratorBasedBuilder`, `ArrowBasedBuilder` and `BeamBasedBuilder`. When writing a builder script, these classes are the ones you should inherit from. And since all of them implement `_prepare_split`, you only have to implement the three methods mentio... | 2022-12-12T01:38:54 | 2022-12-20T18:20:57 | 2022-12-12T16:48:56 | ### Describe the bug
I'm not sure this is a bug or if it's just missing in the documentation, or i'm not doing something correctly, but I'm subclassing `DatasetBuilder` and getting the following error because on the `DatasetBuilder` class the `_prepare_split` method is abstract (as are the others we are required to im... | jmwoloso | https://github.com/huggingface/datasets/issues/5351 | null | false |
1,487,559,904 | 5,350 | Clean up Loading methods docstrings | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-09T22:25:30 | 2022-12-12T17:27:20 | 2022-12-12T17:24:01 | Clean up for the docstrings in Loading methods! | stevhliu | https://github.com/huggingface/datasets/pull/5350 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5350",
"html_url": "https://github.com/huggingface/datasets/pull/5350",
"diff_url": "https://github.com/huggingface/datasets/pull/5350.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5350.patch",
"merged_at": "2022-12-12T17:24... | true |
1,487,396,780 | 5,349 | Clean up remaining Main Classes docstrings | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-09T20:17:15 | 2022-12-12T17:27:17 | 2022-12-12T17:24:13 | This PR cleans up the remaining docstrings in Main Classes (`IterableDataset`, `IterableDatasetDict`, and `Features`). | stevhliu | https://github.com/huggingface/datasets/pull/5349 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5349",
"html_url": "https://github.com/huggingface/datasets/pull/5349",
"diff_url": "https://github.com/huggingface/datasets/pull/5349.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5349.patch",
"merged_at": "2022-12-12T17:24... | true |
1,486,975,626 | 5,348 | The data downloaded in the download folder of the cache does not respect `umask` | open | [
"note, that `datasets` already did some of that umask fixing in the past and also at the hub - the recent work on the hub about the same: https://github.com/huggingface/huggingface_hub/pull/1220\r\n\r\nAlso I noticed that each file has a .json counterpart and the latter always has the correct perms:\r\n\r\n```\r\n-... | 2022-12-09T15:46:27 | 2022-12-09T17:21:26 | null | ### Describe the bug
For a project on a cluster we are several users to share the same cache for the datasets library. And we have a problem with the permissions on the data downloaded in the cache.
Indeed, it seems that the data is downloaded by giving read and write permissions only to the user launching the com... | SaulLu | https://github.com/huggingface/datasets/issues/5348 | null | false |
1,486,920,261 | 5,347 | Force soundfile to return float32 instead of the default float64 | open | [
"cc @polinaeterna",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5347). All of your documentation changes will be reflected on that endpoint.",
"Cool ! Feel free to add a comment in the code to explain that and we can merge :)",
"I'm not sure if this is a good change ... | 2022-12-09T15:10:24 | 2023-01-17T16:12:49 | null | (Fixes issue #5345) | qmeeus | https://github.com/huggingface/datasets/pull/5347 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5347",
"html_url": "https://github.com/huggingface/datasets/pull/5347",
"diff_url": "https://github.com/huggingface/datasets/pull/5347.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5347.patch",
"merged_at": null
} | true |
1,486,884,983 | 5,346 | [Quick poll] Give your opinion on the future of the Hugging Face Open Source ecosystem! | closed | [
"As the survey is finished, can we close this issue, @LysandreJik ?",
"Yes! I'll post a public summary on the forums shortly.",
"Is the summary available? I would be interested in reading your findings."
] | 2022-12-09T14:48:02 | 2023-06-02T20:24:44 | 2023-01-25T19:35:40 | Thanks to all of you, Datasets is just about to pass 15k stars!
Since the last survey, a lot has happened: the [diffusers](https://github.com/huggingface/diffusers), [evaluate](https://github.com/huggingface/evaluate) and [skops](https://github.com/skops-dev/skops) libraries were born. `timm` joined the Hugging Face... | LysandreJik | https://github.com/huggingface/datasets/issues/5346 | null | false |
1,486,555,384 | 5,345 | Wrong dtype for array in audio features | open | [
"After some more investigation, this is due to [this line of code](https://github.com/huggingface/datasets/blob/main/src/datasets/features/audio.py#L279). The function `sf.read(file)` should be updated to `sf.read(file, dtype=\"float32\")`\r\n\r\nIndeed, the default value in soundfile is `float64` ([see here](https... | 2022-12-09T11:05:11 | 2023-02-10T14:39:28 | null | ### Describe the bug
When concatenating/interleaving different datasets, I stumble into an error because the features can't be aligned. After some investigation, I understood that the audio arrays had different dtypes, namely `float32` and `float64`. Consequently, the datasets cannot be merged.
### Steps to repro... | qmeeus | https://github.com/huggingface/datasets/issues/5345 | null | false |
1,485,628,319 | 5,344 | Clean up Dataset and DatasetDict | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-09T00:02:08 | 2022-12-13T00:56:07 | 2022-12-13T00:53:02 | This PR cleans up the docstrings for the other half of the methods in `Dataset` and finishes `DatasetDict`. | stevhliu | https://github.com/huggingface/datasets/pull/5344 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5344",
"html_url": "https://github.com/huggingface/datasets/pull/5344",
"diff_url": "https://github.com/huggingface/datasets/pull/5344.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5344.patch",
"merged_at": "2022-12-13T00:53... | true |
1,485,297,823 | 5,343 | T5 for Q&A produces truncated sentence | closed | [] | 2022-12-08T19:48:46 | 2022-12-08T19:57:17 | 2022-12-08T19:57:17 | Dear all, I am fine-tuning T5 for Q&A task using the MedQuAD ([GitHub - abachaa/MedQuAD: Medical Question Answering Dataset of 47,457 QA pairs created from 12 NIH websites](https://github.com/abachaa/MedQuAD)) dataset. In the dataset, there are many long answers with thousands of words. I have used pytorch_lightning to... | junyongyou | https://github.com/huggingface/datasets/issues/5343 | null | false |
1,485,244,178 | 5,342 | Emotion dataset cannot be downloaded | closed | [
"Hi @cbarond there's already an open issue at https://github.com/dair-ai/emotion_dataset/issues/5, as the data seems to be missing now, so check that issue instead 👍🏻 ",
"Thanks @cbarond for reporting and @alvarobartt for pointing to the issue we opened in the author's repo.\r\n\r\nIndeed, this issue was first ... | 2022-12-08T19:07:09 | 2023-02-23T19:13:19 | 2022-12-09T10:46:11 | ### Describe the bug
The emotion dataset gives a FileNotFoundError. The full error is: `FileNotFoundError: Couldn't find file at https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt?dl=1`.
It was working yesterday (December 7, 2022), but stopped working today (December 8, 2022).
### Steps to reproduce the bug
... | cbarond | https://github.com/huggingface/datasets/issues/5342 | null | false |
1,484,376,644 | 5,341 | Remove tasks.json | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-08T11:04:35 | 2022-12-09T12:26:21 | 2022-12-09T12:23:20 | After discussions in https://github.com/huggingface/datasets/pull/5335 we should remove this file that is not used anymore. We should update https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts instead. | lhoestq | https://github.com/huggingface/datasets/pull/5341 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5341",
"html_url": "https://github.com/huggingface/datasets/pull/5341",
"diff_url": "https://github.com/huggingface/datasets/pull/5341.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5341.patch",
"merged_at": "2022-12-09T12:23... | true |
1,483,182,158 | 5,340 | Clean up DatasetInfo and Dataset docstrings | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-08T00:17:53 | 2022-12-08T19:33:14 | 2022-12-08T19:30:10 | This PR cleans up the docstrings for `DatasetInfo` and about half of the methods in `Dataset`. | stevhliu | https://github.com/huggingface/datasets/pull/5340 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5340",
"html_url": "https://github.com/huggingface/datasets/pull/5340",
"diff_url": "https://github.com/huggingface/datasets/pull/5340.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5340.patch",
"merged_at": "2022-12-08T19:30... | true |
1,482,817,424 | 5,339 | Add Video feature, videofolder, and video-classification task | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5339). All of your documentation changes will be reflected on that endpoint.",
"@lhoestq I think I need some serious help with the tests 😅...I started this locally but it got too time consuming.\n\nOne issue I remember running... | 2022-12-07T20:48:34 | 2024-01-11T06:30:24 | 2023-10-11T09:13:11 | This PR does the following:
- Adds `Video` feature (Resolves #5225 )
- Adds `video-classification` task
- Adds `videofolder` packaged module for easy loading of local video classification datasets
TODO:
- [ ] add tests
- [ ] add docs | nateraw | https://github.com/huggingface/datasets/pull/5339 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5339",
"html_url": "https://github.com/huggingface/datasets/pull/5339",
"diff_url": "https://github.com/huggingface/datasets/pull/5339.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5339.patch",
"merged_at": null
} | true |
1,482,646,151 | 5,338 | `map()` stops every 1000 steps | closed | [
"Hi !\r\n\r\n> It starts using all the cores (I am not sure why because I did not pass num_proc)\r\n\r\nThe tokenizer uses Rust code that is multithreaded. And maybe the `feature_extractor` might run some things in parallel as well - but I'm not super familiar with its internals.\r\n\r\n> then progress bar stops at... | 2022-12-07T19:09:40 | 2025-02-14T18:10:07 | 2022-12-10T00:39:28 | ### Describe the bug
I am passing the following `prepare_dataset` function to `Dataset.map` (code is inspired from [here](https://github.com/huggingface/community-events/blob/main/whisper-fine-tuning-event/run_speech_recognition_seq2seq_streaming.py#L454))
```python3
def prepare_dataset(batch):
# load and res... | bayartsogt-ya | https://github.com/huggingface/datasets/issues/5338 | null | false |
1,481,692,156 | 5,337 | Support webdataset format | closed | [
"I like the idea of having `webdataset` as an optional dependency to ensure our loader generates web datasets the same way as the main project.",
"Webdataset is the one of the most popular dataset formats for large scale computer vision tasks. Upvote for this issue. ",
"Any updates on this?",
"We haven't had ... | 2022-12-07T11:32:25 | 2024-03-06T14:39:29 | 2024-03-06T14:39:28 | Webdataset is an efficient format for iterable datasets. It would be nice to support it in `datasets`, as discussed in https://github.com/rom1504/img2dataset/issues/234.
In particular it would be awesome to be able to load one using `load_dataset` in streaming mode (either from a local directory, or from a dataset o... | lhoestq | https://github.com/huggingface/datasets/issues/5337 | null | false |
1,479,649,900 | 5,336 | Set `IterableDataset.map` param `batch_size` typing as optional | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5336). All of your documentation changes will be reflected on that endpoint.",
"Hi @mariosasko, @lhoestq I was wondering whether we should include `batched` as a `pytest.mark` param for the functions testing `IterableDataset.ma... | 2022-12-06T17:08:10 | 2022-12-07T14:14:56 | 2022-12-07T14:06:27 | This PR solves #5325
~Indeed we're using the typing for optional values as `Union[type, None]` as it's similar to how Python 3.10 handles optional values as `type | None`, instead of using `Optional[type]`.~
~Do we want to start using `Union[type, None]` for type-hinting optional values or just keep on using `Op... | alvarobartt | https://github.com/huggingface/datasets/pull/5336 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5336",
"html_url": "https://github.com/huggingface/datasets/pull/5336",
"diff_url": "https://github.com/huggingface/datasets/pull/5336.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5336.patch",
"merged_at": "2022-12-07T14:06... | true |
1,478,890,788 | 5,335 | Update tasks.json | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I think the only place where we need to add it is here https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts\r\n\r\nAnd I think we can remove tasks.json completely from this repo",
"Isn't tasks.json used ... | 2022-12-06T11:37:57 | 2023-09-24T10:06:42 | 2022-12-07T12:46:03 | Context:
* https://github.com/huggingface/datasets/issues/5255#issuecomment-1339107195
Cc: @osanseviero | sayakpaul | https://github.com/huggingface/datasets/pull/5335 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5335",
"html_url": "https://github.com/huggingface/datasets/pull/5335",
"diff_url": "https://github.com/huggingface/datasets/pull/5335.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5335.patch",
"merged_at": null
} | true |
1,477,421,927 | 5,334 | Clean up docstrings | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks ! Let us know if we can help :)\r\n\r\nSmall pref for having multiple PRs",
"Awesome, thanks! Sorry this one is a little big, I'll open some smaller ones next :)"
] | 2022-12-05T20:56:08 | 2022-12-09T01:44:25 | 2022-12-09T01:41:44 | As raised by @polinaeterna in #5324, some of the docstrings are a bit of a mess because it has both Markdown and Sphinx syntax. This PR fixes the docstring for `DatasetBuilder`.
I'll start working on cleaning up the rest of the docstrings and removing the old Sphinx syntax (let me know if you prefer one big PR with... | stevhliu | https://github.com/huggingface/datasets/pull/5334 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5334",
"html_url": "https://github.com/huggingface/datasets/pull/5334",
"diff_url": "https://github.com/huggingface/datasets/pull/5334.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5334.patch",
"merged_at": "2022-12-09T01:41... | true |
1,476,890,156 | 5,333 | fix: 🐛 pass the token to get the list of config names | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-05T16:06:09 | 2022-12-06T08:25:17 | 2022-12-06T08:22:49 | Otherwise, get_dataset_infos doesn't work on gated or private datasets, even with the correct token. | severo | https://github.com/huggingface/datasets/pull/5333 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5333",
"html_url": "https://github.com/huggingface/datasets/pull/5333",
"diff_url": "https://github.com/huggingface/datasets/pull/5333.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5333.patch",
"merged_at": "2022-12-06T08:22... | true |
1,476,513,072 | 5,332 | Passing numpy array to ClassLabel names causes ValueError | closed | [
"Should `datasets` allow `ClassLabel` input parameter to be an `np.array` even though internally we need to cast it to a Python list? @lhoestq @mariosasko ",
"Hi! No, I don't think so. The `names` parameter is [annotated](https://github.com/huggingface/datasets/blob/582236640b9109988e5f7a16a8353696ffa09a16/src/d... | 2022-12-05T12:59:03 | 2022-12-22T16:32:50 | 2022-12-22T16:32:50 | ### Describe the bug
If a numpy array is passed to the names argument of ClassLabel, creating a dataset with those features causes an error.
### Steps to reproduce the bug
https://colab.research.google.com/drive/1cV_es1PWZiEuus17n-2C-w0KEoEZ68IX
TLDR:
If I define my classes as:
```
my_classes = np.array(['on... | freddyheppell | https://github.com/huggingface/datasets/issues/5332 | null | false |
1,473,146,738 | 5,331 | Support for multiple configs in packaged modules via metadata yaml info | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"feel free to merge `main` into your PR to fix the CI :)",
"Let me see if I can fix the pattern thing ^^'",
"Hmm I think it would be easier to specify the `data_files` in the end, because having a split pattern like `{split}-...` ... | 2022-12-02T16:43:44 | 2023-07-24T15:49:54 | 2023-07-13T13:27:56 | will solve https://github.com/huggingface/datasets/issues/5209 and https://github.com/huggingface/datasets/issues/5151 and many other...
Config parameters for packaged builders are parsed from `“builder_config”` field in README.md file (separate firs-level field, not part of “dataset_info”), example:
```yaml
---
... | polinaeterna | https://github.com/huggingface/datasets/pull/5331 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5331",
"html_url": "https://github.com/huggingface/datasets/pull/5331",
"diff_url": "https://github.com/huggingface/datasets/pull/5331.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5331.patch",
"merged_at": "2023-07-13T13:27... | true |
1,471,999,125 | 5,329 | Clarify imagefolder is for small datasets | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I think it's also reasonable to add the same note to the AudioFolder decription",
"Thank you ! I think \"regular\" is more appropriate than \"small\". It can easily scale to a few thousands of images - just not millions x)",
"Rep... | 2022-12-01T21:47:29 | 2022-12-06T17:20:04 | 2022-12-06T17:16:53 | Based on feedback from [here](https://github.com/huggingface/datasets/issues/5317#issuecomment-1334108824), this PR adds a note to the `imagefolder` loading and creating docs that `imagefolder` is designed for small scale image datasets. | stevhliu | https://github.com/huggingface/datasets/pull/5329 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5329",
"html_url": "https://github.com/huggingface/datasets/pull/5329",
"diff_url": "https://github.com/huggingface/datasets/pull/5329.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5329.patch",
"merged_at": "2022-12-06T17:16... | true |
1,471,661,437 | 5,328 | Fix docs building for main | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"EDIT\r\nAt least the docs for ~~main~~ PR branch are now built:\r\n- https://github.com/huggingface/datasets/actions/runs/3594847760/jobs/6053620813",
"Build documentation for main branch was triggered after this PR being merged: h... | 2022-12-01T17:07:45 | 2022-12-02T16:29:00 | 2022-12-02T16:26:00 | This PR reverts the triggering event for building documentation introduced by:
- #5250
Fix #5326. | albertvillanova | https://github.com/huggingface/datasets/pull/5328 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5328",
"html_url": "https://github.com/huggingface/datasets/pull/5328",
"diff_url": "https://github.com/huggingface/datasets/pull/5328.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5328.patch",
"merged_at": "2022-12-02T16:26... | true |
1,471,657,247 | 5,327 | Avoid unwanted behaviour when splits from script and metadata are not matching because of outdated metadata | open | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5327). All of your documentation changes will be reflected on that endpoint."
] | 2022-12-01T17:05:23 | 2023-01-23T12:48:29 | null | will fix #5315 | polinaeterna | https://github.com/huggingface/datasets/pull/5327 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5327",
"html_url": "https://github.com/huggingface/datasets/pull/5327",
"diff_url": "https://github.com/huggingface/datasets/pull/5327.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5327.patch",
"merged_at": null
} | true |
1,471,634,168 | 5,326 | No documentation for main branch is built | closed | [] | 2022-12-01T16:50:58 | 2022-12-02T16:26:01 | 2022-12-02T16:26:01 | Since:
- #5250
- Commit: 703b84311f4ead83c7f79639f2dfa739295f0be6
the docs for main branch are no longer built.
The change introduced only triggers the docs building for releases. | albertvillanova | https://github.com/huggingface/datasets/issues/5326 | null | false |
1,471,536,822 | 5,325 | map(...batch_size=None) for IterableDataset | closed | [
"Hi! I agree it makes sense for `IterableDataset.map` to support the `batch_size=None` case. This should be super easy to fix.",
"@mariosasko as this is something simple maybe I can include it as part of https://github.com/huggingface/datasets/pull/5311? Let me know :+1:",
"#self-assign",
"Feel free to close ... | 2022-12-01T15:43:42 | 2022-12-07T15:54:43 | 2022-12-07T15:54:42 | ### Feature request
Dataset.map(...) allows batch_size to be None. It would be nice if IterableDataset did too.
### Motivation
Although it may seem a bit of a spurious request given that `IterableDataset` is meant for larger than memory datasets, but there are a couple of reasons why this might be nice.
One is th... | frankier | https://github.com/huggingface/datasets/issues/5325 | null | false |
1,471,524,512 | 5,324 | Fix docstrings and types in documentation that appears on the website | open | [
"I agree we have a mess with docstrings...",
"Ok, I believe we've cleaned up most of the old syntax we were using for the user-facing docs! There are still a couple of `:obj:`'s and `:class:` floating around in the docstrings we don't expose that I'll track down :)",
"Hi @polinaeterna @albertvillanova @stevhliu... | 2022-12-01T15:34:53 | 2024-01-23T16:21:54 | null | While I was working on https://github.com/huggingface/datasets/pull/5313 I've noticed that we have a mess in how we annotate types and format args and return values in the code. And some of it is displayed in the [Reference section](https://huggingface.co/docs/datasets/package_reference/builder_classes) of the document... | polinaeterna | https://github.com/huggingface/datasets/issues/5324 | null | false |
1,471,518,803 | 5,323 | Duplicated Keys in Taskmaster-2 Dataset | closed | [
"Thanks for reporting, @liaeh.\r\n\r\nWe are having a look at it. ",
"I have transferred the discussion to the Community tab of the dataset: https://huggingface.co/datasets/taskmaster2/discussions/1"
] | 2022-12-01T15:31:06 | 2022-12-01T16:26:06 | 2022-12-01T16:26:06 | ### Describe the bug
Loading certain splits () of the taskmaster-2 dataset fails because of a DuplicatedKeysError. This occurs for the following domains: `'hotels', 'movies', 'music', 'sports'`. The domains `'flights', 'food-ordering', 'restaurant-search'` load fine.
Output:
### Steps to reproduce the bug
```
... | liaeh | https://github.com/huggingface/datasets/issues/5323 | null | false |
1,471,502,162 | 5,322 | Raise error for `.tar` archives in the same way as for `.tar.gz` and `.tgz` in `_get_extraction_protocol` | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-01T15:19:28 | 2022-12-14T16:37:16 | 2022-12-14T16:33:30 | Currently `download_and_extract` doesn't throw an error when it is used with files with `.tar` extension in streaming mode because `_get_extraction_protocol` doesn't do it (like it does for `tar.gz` and `tgz`). `_get_extraction_protocol` returns formatted url as if we support tar protocol but we don't.
That means tha... | polinaeterna | https://github.com/huggingface/datasets/pull/5322 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5322",
"html_url": "https://github.com/huggingface/datasets/pull/5322",
"diff_url": "https://github.com/huggingface/datasets/pull/5322.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5322.patch",
"merged_at": "2022-12-14T16:33... | true |
1,471,430,667 | 5,321 | Fix loading from HF GCP cache | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Do you know why this stopped working?\r\n\r\nIt comes from the changes in https://github.com/huggingface/datasets/pull/5107/files#diff-355ae5c229f95f86895404b72378ecd6e966c41cbeebb674af6fe6e9611bc126"
] | 2022-12-01T14:39:06 | 2022-12-01T16:10:09 | 2022-12-01T16:07:02 | As reported in https://discuss.huggingface.co/t/error-loading-wikipedia-dataset/26599/4 it's not possible to download a cached version of Wikipedia from the HF GCP cache
I fixed it and added an integration test (runs in 10sec) | lhoestq | https://github.com/huggingface/datasets/pull/5321 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5321",
"html_url": "https://github.com/huggingface/datasets/pull/5321",
"diff_url": "https://github.com/huggingface/datasets/pull/5321.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5321.patch",
"merged_at": "2022-12-01T16:07... | true |
1,471,360,910 | 5,320 | [Extract] Place the lock file next to the destination directory | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-01T13:55:49 | 2022-12-01T15:36:44 | 2022-12-01T15:33:58 | Previously it was placed next to the archive to extract, but the archive can be in a read-only directory as noticed in https://github.com/huggingface/datasets/issues/5295
Therefore I moved the lock location to be next to the destination directory, which is required to have write permissions | lhoestq | https://github.com/huggingface/datasets/pull/5320 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5320",
"html_url": "https://github.com/huggingface/datasets/pull/5320",
"diff_url": "https://github.com/huggingface/datasets/pull/5320.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5320.patch",
"merged_at": "2022-12-01T15:33... | true |
1,470,945,515 | 5,319 | Fix Text sample_by paragraph | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-01T09:08:09 | 2022-12-01T15:21:44 | 2022-12-01T15:19:00 | Fix #5316. | albertvillanova | https://github.com/huggingface/datasets/pull/5319 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5319",
"html_url": "https://github.com/huggingface/datasets/pull/5319",
"diff_url": "https://github.com/huggingface/datasets/pull/5319.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5319.patch",
"merged_at": "2022-12-01T15:19... | true |
1,470,749,750 | 5,318 | Origin/fix missing features error | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"please review :) @lhoestq @ola13 thankoo",
"Thanks :) I just updated the test to make sure it works even when there's a column missing, and did a minor change to json.py to add the missing columns for the other kinds of JSON files ... | 2022-12-01T06:18:39 | 2022-12-12T19:06:42 | 2022-12-04T05:49:39 | This fixes the problem of when the dataset_load function reads a function with "features" provided but some read batches don't have columns that later show up. For instance, the provided "features" requires columns A,B,C but only columns B,C show. This fixes this by adding the column A with nulls. | eunseojo | https://github.com/huggingface/datasets/pull/5318 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5318",
"html_url": "https://github.com/huggingface/datasets/pull/5318",
"diff_url": "https://github.com/huggingface/datasets/pull/5318.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5318.patch",
"merged_at": "2022-12-04T05:49... | true |
1,470,390,164 | 5,317 | `ImageFolder` performs poorly with large datasets | open | [
"Hi ! ImageFolder is made for small scale datasets indeed. For large scale image datasets you better group your images in TAR archives or Arrow/Parquet files. This is true not just for ImageFolder loading performance, but also because having millions of files is not ideal for your filesystem or when moving the data... | 2022-12-01T00:04:21 | 2022-12-01T21:49:26 | null | ### Describe the bug
While testing image dataset creation, I'm seeing significant performance bottlenecks with imagefolders when scanning a directory structure with large number of images.
## Setup
* Nested directories (5 levels deep)
* 3M+ images
* 1 `metadata.jsonl` file
## Performance Degradation Point... | salieri | https://github.com/huggingface/datasets/issues/5317 | null | false |
1,470,115,681 | 5,316 | Bug in sample_by="paragraph" | closed | [
"Thanks for reporting, @adampauls.\r\n\r\nWe are having a look at it. "
] | 2022-11-30T19:24:13 | 2022-12-01T15:19:02 | 2022-12-01T15:19:02 | ### Describe the bug
I think [this line](https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/text/text.py#L96) is wrong and should be `batch = f.read(self.config.chunksize)`. Otherwise it will never terminate because even when `f` is finished reading, `batch` will still be truthy from the l... | adampauls | https://github.com/huggingface/datasets/issues/5316 | null | false |
1,470,026,797 | 5,315 | Adding new splits to a dataset script with existing old splits info in metadata's `dataset_info` fails | open | [
"EDIT:\r\nI think in this case, the metadata files (either README or JSON) should not be read (i.e. `self.info.splits` should be None).\r\n\r\nOne idea: \r\n- I think ideally we should set this behavior when we pass `--save_info` to the CLI `test`\r\n- However, currently, the builder is unaware of this: `save_info`... | 2022-11-30T18:02:15 | 2022-12-02T07:02:53 | null | ### Describe the bug
If you first create a custom dataset with a specific set of splits, generate metadata with `datasets-cli test ... --save_info`, then change your script to include more splits, it fails.
That's what happened in https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/discussions/2#6385f... | polinaeterna | https://github.com/huggingface/datasets/issues/5315 | null | false |
1,469,685,118 | 5,314 | Datasets: classification_report() got an unexpected keyword argument 'suffix' | closed | [
"This seems similar to https://github.com/huggingface/datasets/issues/2512 Can you try to update seqeval ? ",
"@JonathanAlis also note that the metrics are deprecated in our `datasets` library.\r\n\r\nPlease, use the new library 🤗 Evaluate instead: https://huggingface.co/docs/evaluate"
] | 2022-11-30T14:01:03 | 2023-07-21T14:40:31 | 2023-07-21T14:40:31 | https://github.com/huggingface/datasets/blob/main/metrics/seqeval/seqeval.py
> import datasets
predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
seqeval = datasets.load_metri... | JonathanAlis | https://github.com/huggingface/datasets/issues/5314 | null | false |
1,468,484,136 | 5,313 | Fix description of streaming in the docs | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-29T18:00:28 | 2022-12-01T14:55:30 | 2022-12-01T14:00:34 | We say that "the data is being downloaded progressively" which is not true, it's just streamed, so I fixed it. Probably I missed some other places where it is written?
Also changed docstrings for `StreamingDownloadManager`'s `download` and `extract` to reflect the same, as these docstrings are displayed in the docu... | polinaeterna | https://github.com/huggingface/datasets/pull/5313 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5313",
"html_url": "https://github.com/huggingface/datasets/pull/5313",
"diff_url": "https://github.com/huggingface/datasets/pull/5313.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5313.patch",
"merged_at": "2022-12-01T14:00... | true |
1,468,352,562 | 5,312 | Add DatasetDict.to_pandas | closed | [
"The current implementation is what I had in mind, i.e. concatenate all splits by default.\r\n\r\nHowever, I think most tabular datasets would come as a single split. So for that usecase, it wouldn't change UX if we raise when there are more than one splits.\r\n\r\nAnd for multiple splits, the user either passes a ... | 2022-11-29T16:30:02 | 2023-09-24T10:06:19 | 2023-01-25T17:33:42 | From discussions in https://github.com/huggingface/datasets/issues/5189, for tabular data it doesn't really make sense to have to do
```python
df = load_dataset(...)["train"].to_pandas()
```
because many datasets are not split.
In this PR I added `to_pandas` to `DatasetDict` which returns the DataFrame:
If th... | lhoestq | https://github.com/huggingface/datasets/pull/5312 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5312",
"html_url": "https://github.com/huggingface/datasets/pull/5312",
"diff_url": "https://github.com/huggingface/datasets/pull/5312.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5312.patch",
"merged_at": null
} | true |
1,467,875,153 | 5,311 | Add `features` param to `IterableDataset.map` | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-29T11:08:34 | 2022-12-06T15:45:02 | 2022-12-06T15:42:04 | ## Description
As suggested by @lhoestq in #3888, we should be adding the param `features` to `IterableDataset.map` so that the features can be preserved (not turned into `None` as that's the default behavior) whenever the user passes those as param, so as to be consistent with `Dataset.map`, as it provides the `fea... | alvarobartt | https://github.com/huggingface/datasets/pull/5311 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5311",
"html_url": "https://github.com/huggingface/datasets/pull/5311",
"diff_url": "https://github.com/huggingface/datasets/pull/5311.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5311.patch",
"merged_at": "2022-12-06T15:42... | true |