| id | number | title | state | comments | created_at | updated_at | closed_at | body | user | html_url | pull_request | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,659,837,510 | 5,722 | Distributed Training Error on Customized Dataset | closed | [
"Hmm the error doesn't seem related to data loading.\r\n\r\nRegarding `split_dataset_by_node`: it's generally used to split an iterable dataset (e.g. when streaming) in pytorch DDP. It's not needed if you use a regular dataset since the pytorch DataLoader already assigns a subset of the dataset indices to each node... | 2023-04-09T11:04:59 | 2023-07-24T14:50:46 | 2023-07-24T14:50:46 | Hi guys, recently I tried to use `datasets` to train a dual encoder.
I built my own dataset according to the nice [tutorial](https://huggingface.co/docs/datasets/v2.11.0/en/dataset_script)
Here is my code:
```python
class RetrivalDataset(datasets.GeneratorBasedBuilder):
"""CrossEncoder dataset."""
B... | wlhgtc | https://github.com/huggingface/datasets/issues/5722 | null | false |
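A minimal sketch of the sharding advice in the comment above; the dataset name and the rank/world_size values are placeholders for whatever your DDP setup provides:
```python
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

# Iterable/streaming case: shard the stream manually, one piece per node.
ds = load_dataset("c4", "en", split="train", streaming=True)
ds = split_dataset_by_node(ds, rank=0, world_size=2)  # values come from your DDP setup

# Regular (map-style) case: no manual split needed; a DistributedSampler in the
# pytorch DataLoader already assigns a subset of the dataset indices to each node.
```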
1,659,680,682 | 5,721 | Calling datasets.load_dataset("text" ...) results in a wrong split. | open | [] | 2023-04-08T23:55:12 | 2023-04-08T23:55:12 | null | ### Describe the bug
When creating a text dataset, the training split should have the bulk of the examples by default. Currently, testing does.
### Steps to reproduce the bug
I have a folder with 18K text files in it. Each text file essentially consists of a document or article scraped from the web. Calling the follo... | cyrilzakka | https://github.com/huggingface/datasets/issues/5721 | null | false |
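A hedged workaround for the mis-assigned split: naming the split explicitly via `data_files` (the folder glob is hypothetical) bypasses filename-pattern split inference:
```python
from datasets import load_dataset

# An explicit data_files mapping avoids relying on pattern-based split detection.
ds = load_dataset("text", data_files={"train": "my_text_folder/*.txt"})
print(ds)  # the bulk of the examples should now land in the "train" split
```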
1,659,610,705 | 5,720 | Streaming IterableDatasets do not work with torch DataLoaders | open | [
"Edit: This behavior is true even without `.take/.set`",
"I'm experiencing the same problem that @jlehrer1. I was able to reproduce it with a very small example:\r\n\r\n```py\r\nfrom datasets import Dataset, load_dataset, load_dataset_builder\r\nfrom torch.utils.data import DataLoader\r\n\r\n\r\ndef my_gen():\r\n... | 2023-04-08T18:45:48 | 2025-03-19T14:06:47 | null | ### Describe the bug
When using streaming datasets set up with train/val split using `.skip()` and `.take()`, the following error occurs when iterating over a torch dataloader:
```
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 363, in __iter__
self.... | jlehrer1 | https://github.com/huggingface/datasets/issues/5720 | null | false |
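A condensed version of the reproduction from the comment above, using a toy generator; HF iterable datasets subclass `torch.utils.data.IterableDataset` when torch is installed, so the `DataLoader` accepts them directly:
```python
from datasets import Dataset
from torch.utils.data import DataLoader

def my_gen():
    for i in range(100):
        yield {"x": i}

ds = Dataset.from_generator(my_gen).to_iterable_dataset()
train, val = ds.skip(10), ds.take(10)   # the .skip/.take split pattern from the report
for batch in DataLoader(train, batch_size=4):
    pass  # this loop is where the reported error surfaced
```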
1,659,203,222 | 5,719 | Array2D feature creates a list of list instead of a numpy array | closed | [
"Hi! \r\n\r\nYou need to set the format to `np` before indexing the dataset to get NumPy arrays:\r\n```python\r\nfeatures = Features(dict(seq=Array2D((2,2), 'float32'))) \r\nds = Dataset.from_dict(dict(seq=[np.random.rand(2,2)]), features=features)\r\nds.set_format(\"np\")\r\na = ds[0]['seq']\r\n```\r\n\r\n> I th... | 2023-04-07T21:04:08 | 2023-04-20T15:34:41 | 2023-04-20T15:34:41 | ### Describe the bug
I'm not sure if this is expected behavior or not. When I create a 2D array using `Array2D`, the data has list type instead of numpy array. I think it should not be the expected behavior especially when I feed a numpy array as input to the data creation function. Why is it converting my array int... | offchan42 | https://github.com/huggingface/datasets/issues/5719 | null | false |
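The fix from the comment above, spelled out as a runnable sketch: without `set_format("np")`, indexing returns plain nested lists:
```python
import numpy as np
from datasets import Array2D, Dataset, Features

features = Features({"seq": Array2D((2, 2), "float32")})
ds = Dataset.from_dict({"seq": [np.random.rand(2, 2)]}, features=features)
print(type(ds[0]["seq"]))  # list (of lists) by default
ds.set_format("np")
print(type(ds[0]["seq"]))  # numpy.ndarray
```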
1,658,958,406 | 5,718 | Reorder default data splits to have validation before test | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"After this CI error: https://github.com/huggingface/datasets/actions/runs/4639528358/jobs/8210492953?pr=5718\r\n```\r\nFAILED tests/test_data_files.py::test_get_data_files_patterns[data_file_per_split4] - AssertionError: assert ['ran... | 2023-04-07T16:01:26 | 2023-04-27T14:43:13 | 2023-04-27T14:35:52 | This PR reorders data splits, so that by default validation appears before test.
The default order becomes: [train, validation, test] instead of [train, test, validation]. | albertvillanova | https://github.com/huggingface/datasets/pull/5718 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5718",
"html_url": "https://github.com/huggingface/datasets/pull/5718",
"diff_url": "https://github.com/huggingface/datasets/pull/5718.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5718.patch",
"merged_at": "2023-04-27T14:35... | true |
1,658,729,866 | 5,717 | Error when saving to disk a dataset of images | open | [
"Looks like as long as the number of shards makes a batch lower than 1000 images it works. In my training set I have 40K images. If I use `num_shards=40` (batch of 1000 images) I get the error, but if I update it to `num_shards=50` (batch of 800 images) it works.\r\n\r\nI will be happy to share my dataset privately... | 2023-04-07T11:59:17 | 2025-07-13T08:27:47 | null | ### Describe the bug
Hello!
I have an issue when I try to save my dataset of images to disk. The error I get is:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1442, in save_... | jplu | https://github.com/huggingface/datasets/issues/5717 | null | false |
1,658,613,092 | 5,716 | Handle empty audio | closed | [
"Hi! Can you share one of the problematic audio files with us?\r\n\r\nI tried to reproduce the error with the following code: \r\n```python\r\nimport soundfile as sf\r\nimport numpy as np\r\nfrom datasets import Audio\r\n\r\nsf.write(\"empty.wav\", np.array([]), 16000)\r\nAudio(sampling_rate=24000).decode_example(... | 2023-04-07T09:51:40 | 2023-09-27T17:47:08 | 2023-09-27T17:47:08 | Some audio paths exist, but they are empty, and an error will be reported when reading the audio path.How to use the filter function to avoid the empty audio path?
when a audio is empty, when do resample , it will break:
`array, sampling_rate = sf.read(f) array = librosa.resample(array, orig_sr=sampling_rate, target_... | ben-8543 | https://github.com/huggingface/datasets/issues/5716 | null | false |
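One way to answer the question in the body, as a sketch assuming the raw path lives in a plain string column: `sf.info` reads only the file header, so empty files can be dropped without decoding:
```python
import soundfile as sf
from datasets import Dataset

ds = Dataset.from_dict({"audio_path": ["clip1.wav", "clip2.wav"]})  # placeholder paths

def has_frames(example):
    # header-only read: cheap, and detects zero-length audio before any decode
    return sf.info(example["audio_path"]).frames > 0

ds = ds.filter(has_frames)  # drop empty files before casting the column to Audio()
```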
1,657,479,788 | 5,715 | Return Numpy Array (fixed length) Mode, in __get_item__, Instead of List | closed | [
"Hi! \r\n\r\nYou can use [`.set_format(\"np\")`](https://huggingface.co/docs/datasets/process#format) to get NumPy arrays (or Pytorch tensors with `.set_format(\"torch\")`) in `__getitem__`.\r\n\r\nAlso, have you been able to reproduce the linked PyTorch issue with a HF dataset?\r\n "
] | 2023-04-06T13:57:48 | 2023-04-20T17:16:26 | 2023-04-20T17:16:26 | ### Feature request
There are old, well-known issues that are nevertheless easy to forget when using multiprocessing with the PyTorch DataLoader:
RAM or shared-memory usage in PyTorch becomes too high when we set num_workers > 1 and the return type of the dataset or dataloader is a "List" or "Dict".
https://github.com/pytorch/pytorch... | jungbaepark | https://github.com/huggingface/datasets/issues/5715 | null | false |
1,657,388,033 | 5,714 | Fix xnumpy_load for .npz files | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-04-06T13:01:45 | 2023-04-07T09:23:54 | 2023-04-07T09:16:57 | PR:
- #5626
implemented support for streaming `.npy` files by using `numpy.load`.
However, it introduced a bug when used with `.npz` files, within a context manager:
```
ValueError: seek of closed file
```
or in streaming mode:
```
ValueError: I/O operation on closed file.
```
This PR fixes the bug an... | albertvillanova | https://github.com/huggingface/datasets/pull/5714 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5714",
"html_url": "https://github.com/huggingface/datasets/pull/5714",
"diff_url": "https://github.com/huggingface/datasets/pull/5714.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5714.patch",
"merged_at": "2023-04-07T09:16... | true |
1,657,141,251 | 5,713 | ArrowNotImplementedError when loading dataset from the hub | closed | [
"Hi Julien ! This sounds related to https://github.com/huggingface/datasets/issues/5695 - TL;DR: you need to have shards smaller than 2GB to avoid this issue\r\n\r\nThe number of rows per shard is computed using an estimated size of the full dataset, which can sometimes lead to shards bigger than `max_shard_size`. ... | 2023-04-06T10:27:22 | 2023-04-06T13:06:22 | 2023-04-06T13:06:21 | ### Describe the bug
Hello,
I have created a dataset by using the image loader. Once the dataset is created I try to download it and I get the error:
```
Traceback (most recent call last):
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1860, in _prepare_split_... | jplu | https://github.com/huggingface/datasets/issues/5713 | null | false |
1,655,972,106 | 5,712 | load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load() | closed | [
"Closing since this is a duplicate of #5711",
"> Closing since this is a duplicate of #5711\r\n\r\nSorry @mariosasko , my internet went down went submitting the issue, and somehow it ended up creating a duplicate"
] | 2023-04-05T16:47:10 | 2023-04-06T08:32:37 | 2023-04-05T17:17:44 | ### Describe the bug
Hi,
I have some `load_dataset()` code for a custom offline dataset that works with datasets v2.10.1.
```python
ds = datasets.load_dataset(path=dataset_dir,
name=configuration,
data_dir=dataset_dir,
... | rcasero | https://github.com/huggingface/datasets/issues/5712 | null | false |
1,655,971,647 | 5,711 | load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load() | closed | [
"It seems like https://github.com/huggingface/datasets/pull/5626 has introduced this error. \r\n\r\ncc @albertvillanova \r\n\r\nI think replacing:\r\nhttps://github.com/huggingface/datasets/blob/0803a006db1c395ac715662cc6079651f77c11ea/src/datasets/download/streaming_download_manager.py#L777-L778\r\nwith:\r\n```pyt... | 2023-04-05T16:46:49 | 2023-04-07T09:16:59 | 2023-04-07T09:16:59 | ### Describe the bug
Hi,
I have some `load_dataset()` code for a custom offline dataset that works with datasets v2.10.1.
```python
ds = datasets.load_dataset(path=dataset_dir,
name=configuration,
data_dir=dataset_dir,
... | rcasero | https://github.com/huggingface/datasets/issues/5711 | null | false |
1,655,703,534 | 5,710 | OSError: Memory mapping file failed: Cannot allocate memory | closed | [
"Hi! This error means that PyArrow's internal [`mmap`](https://man7.org/linux/man-pages/man2/mmap.2.html) call failed to allocate memory, which can be tricky to debug. Since this error is more related to PyArrow than us, I think it's best to report this issue in their [repo](https://github.com/apache/arrow) (they a... | 2023-04-05T14:11:26 | 2023-04-20T17:16:40 | 2023-04-20T17:16:40 | ### Describe the bug
Hello, I have a series of datasets, each of 5 GB, 600 datasets in total. So together this makes 3 TB.
When I try to load all 600 datasets into memory, I get the above error message.
Is this normal because I'm hitting the max size of memory mapping of the OS?
Thank you
```te... | Saibo-creator | https://github.com/huggingface/datasets/issues/5710 | null | false |
1,655,423,503 | 5,709 | Manually made dataset info not taken into account | closed | [
"hi @jplu ! Did I understand you correctly that you create the dataset, push it to the Hub with `.push_to_hub` and you see a `dataset_infos.json` file there, then you edit this file, load the dataset with `load_dataset` and you don't see any changes in `.info` attribute of a dataset object? \r\n\r\nThis is actually... | 2023-04-05T11:15:17 | 2023-04-06T08:52:20 | 2023-04-06T08:52:19 | ### Describe the bug
Hello,
I'm manually building an image dataset with the `from_dict` approach. I also build the features with the `cast_features` method. Once the dataset is created I push it to the Hub, and a default `dataset_infos.json` file seems to have been automatically added to the repo at the same time. Hen...
1,655,023,642 | 5,708 | Dataset sizes are in MiB instead of MB in dataset cards | closed | [
"Example of bulk edit: https://huggingface.co/datasets/aeslc/discussions/5",
"looks great! \r\n\r\nDo you encode the fact that you've already converted a dataset? (to not convert it twice) or do you base yourself on the info contained in `dataset_info`",
"I am only looping trough the dataset cards, assuming tha... | 2023-04-05T06:36:03 | 2023-12-21T10:20:28 | 2023-12-21T10:20:27 | As @severo reported in an internal discussion (https://github.com/huggingface/moon-landing/issues/5929):
Now we show the dataset size:
- from the dataset card (in the side column)
- from the datasets-server (in the viewer)
But, even if the size is the same, we see a mismatch because the viewer shows MB, while t... | albertvillanova | https://github.com/huggingface/datasets/issues/5708 | null | false |
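The mismatch is purely the two units' definitions; a one-line check for illustration:
```python
size_in_mib = 350
size_in_mb = size_in_mib * 2**20 / 10**6  # 1 MiB = 1,048,576 B, 1 MB = 1,000,000 B
print(round(size_in_mb, 1))               # 367.0: same data, a ~5% larger number
```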
1,653,545,835 | 5,706 | Support categorical data types for Parquet | closed | [
"Hi ! We could definitely a type that holds the categories and uses a DictionaryType storage. There's a ClassLabel type that is similar with a 'names' parameter (similar to a id2label in deep learning frameworks) that uses an integer array as storage.\r\n\r\nIt can be added in `features.py`. Here are some pointers:... | 2023-04-04T09:45:35 | 2024-06-07T12:20:43 | 2024-06-07T12:20:43 | ### Feature request
Huggingface datasets does not seem to support categorical / dictionary data types for Parquet as of now. There seems to be a `TODO` in the code for this feature but no implementation yet. Below you can find sample code to reproduce the error that is currently thrown when attempting to read a Parq... | kklemon | https://github.com/huggingface/datasets/issues/5706 | null | false |
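The closest existing feature, per the comment's pointer, is `ClassLabel`, which stores categories as integers plus a `names` mapping:
```python
from datasets import ClassLabel, Dataset, Features

features = Features({"color": ClassLabel(names=["red", "green", "blue"])})
ds = Dataset.from_dict({"color": [0, 2, 1]}, features=features)
print(ds.features["color"].int2str(2))  # "blue"
```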
1,653,500,383 | 5,705 | Getting next item from IterableDataset took forever. | closed | [
"Hi! It can take some time to iterate over Parquet files as big as yours, convert the samples to Python, and find the first one that matches a filter predicate before yielding it...",
"Thanks @mariosasko, I figured it was the filter operation. I'm closing this issue because it is not a bug, it is the expected beh... | 2023-04-04T09:16:17 | 2023-04-05T23:35:41 | 2023-04-05T23:35:41 | ### Describe the bug
I have a large dataset, about 500GB. The format of the dataset is parquet.
I then load the dataset and try to get the first item
```python
def get_one_item():
dataset = load_dataset("path/to/datafiles", split="train", cache_dir=".", streaming=True)
dataset = dataset.filter(lambda... | HongtaoYang | https://github.com/huggingface/datasets/issues/5705 | null | false |
1,653,471,356 | 5,704 | 5537 speedup load | open | [
"Awesome ! cc @mariosasko :)",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5704). All of your documentation changes will be reflected on that endpoint.",
"Hi, thanks for working on this!\r\n\r\nYour solution only works if the `root` is `\"\"`, e.g., this would yield an... | 2023-04-04T08:58:14 | 2023-04-07T16:10:55 | null | I reimplemented fsspec.spec.glob() in `hffilesystem.py` as `_glob`, used it in `_resolve_single_pattern_in_dataset_repository` only, and saw a 20% speedup in times to load the config, on average.
That's not much when usually this step takes only 2-3 seconds for most datasets, but in this particular case, `bigcode... | semajyllek | https://github.com/huggingface/datasets/pull/5704 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5704",
"html_url": "https://github.com/huggingface/datasets/pull/5704",
"diff_url": "https://github.com/huggingface/datasets/pull/5704.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5704.patch",
"merged_at": null
} | true |
1,653,158,955 | 5,703 | [WIP][Test, Please ignore] Investigate performance impact of using multiprocessing only | closed | [
"`multiprocess` uses `dill` instead of `pickle` for pickling shared objects and, as such, can pickle more types than `multiprocessing`. And I don't think this is something we want to change :).",
"That makes sense to me, and I don't think you should merge this change. I was only curious about the performance impa... | 2023-04-04T04:37:49 | 2023-04-20T03:17:37 | 2023-04-20T03:17:32 | null | hvaara | https://github.com/huggingface/datasets/pull/5703 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5703",
"html_url": "https://github.com/huggingface/datasets/pull/5703",
"diff_url": "https://github.com/huggingface/datasets/pull/5703.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5703.patch",
"merged_at": null
} | true |
1,653,104,720 | 5,702 | Is it possible or how to define a `datasets.Sequence` that could potentially be either a dict, a str, or None? | closed | [
"Hi ! `datasets` uses Apache Arrow as backend to store the data, and it requires each column to have a fixed type. Therefore a column can't have a mix of dicts/lists/strings.\r\n\r\nThough it's possible to have one (nullable) field for each type:\r\n```python\r\nfeatures = Features({\r\n \"text_alone\": Value(\"... | 2023-04-04T03:20:43 | 2023-04-05T14:15:18 | 2023-04-05T14:15:17 | ### Feature request
Hello! Apologies if my question sounds naive:
I was wondering if it's possible, or how one would go about defining a `datasets.Sequence` element in `datasets.Features` that could potentially be either a dict, a str, or None?
Specifically, I’d like to define a feature for a list that contains 18... | gitforziio | https://github.com/huggingface/datasets/issues/5702 | null | false |
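A sketch of the "one nullable field per type" workaround the truncated comment starts to describe; the field names come from the snippet, the rest is assumed:
```python
from datasets import Dataset, Features, Value

features = Features({
    "text_alone": Value("string"),                   # used when the item is a plain str
    "text_with_idxes": {"text": Value("string"),
                        "idxes": [Value("int64")]},  # used when the item is a dict
})
ds = Dataset.from_dict(
    {"text_alone": ["hi", None],
     "text_with_idxes": [None, {"text": "yo", "idxes": [0, 1]}]},
    features=features,
)
```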
1,652,931,399 | 5,701 | Add Dataset.from_spark | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@mariosasko Would you or another HF datasets maintainer be able to review this, please?",
"Amazing ! Great job @maddiedawson \r\n\r\nDo you know if it's possible to also support writing to Parquet using the HF ParquetWriter if `fil... | 2023-04-03T23:51:29 | 2023-06-16T16:39:32 | 2023-04-26T15:43:39 | Adds static method Dataset.from_spark to create datasets from Spark DataFrames.
This approach alleviates users of the need to materialize their dataframe---a common use case is that the user loads their dataset into a dataframe, uses Spark to apply some transformation to some of the columns, and then wants to train ... | maddiedawson | https://github.com/huggingface/datasets/pull/5701 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5701",
"html_url": "https://github.com/huggingface/datasets/pull/5701",
"diff_url": "https://github.com/huggingface/datasets/pull/5701.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5701.patch",
"merged_at": "2023-04-26T15:43... | true |
1,652,527,530 | 5,700 | fix: fix wrong modification of the 'cache_file_name' -related paramet… | open | [
"Have you tried to set the cache file names if `keep_in_memory`is True ?\r\n\r\n```diff\r\n- if self.cache_files:\r\n+ if self.cache_files and not keep_in_memory:\r\n```\r\n\r\nThis way it doesn't change the indice cache arguments and leave them as `None`",
"@lhoestq \r\nRegarding what you suggest:\r\nThe thing i... | 2023-04-03T18:05:26 | 2023-04-06T17:17:27 | null | …ers values in 'train_test_split' + fix bad interaction between 'keep_in_memory' and 'cache_file_name' -related parameters (#5699) | FrancoisNoyez | https://github.com/huggingface/datasets/pull/5700 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5700",
"html_url": "https://github.com/huggingface/datasets/pull/5700",
"diff_url": "https://github.com/huggingface/datasets/pull/5700.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5700.patch",
"merged_at": null
} | true |
1,652,437,419 | 5,699 | Issue when wanting to split in memory a cached dataset | open | [
"Hi ! Good catch, this is wrong indeed and thanks for opening a PR :)",
"Facing the same issue. Kindly fix this bug."
] | 2023-04-03T17:00:07 | 2024-05-15T13:12:18 | null | ### Describe the bug
**In the 'train_test_split' method of the Dataset class** (defined in datasets/arrow_dataset.py), **if 'self.cache_files' is not empty**, then, **regarding the input parameters 'train_indices_cache_file_name' and 'test_indices_cache_file_name', if they are None**, we modify them to make them not No...
1,652,183,611 | 5,698 | Add Qdrant as another search index | open | [
"@mariosasko I'd appreciate your feedback on this. "
] | 2023-04-03T14:25:19 | 2023-04-11T10:28:40 | null | ### Feature request
I'd suggest adding Qdrant (https://qdrant.tech) as another search index available, so users can directly build an index from a dataset. Currently, only FAISS and ElasticSearch are supported: https://huggingface.co/docs/datasets/faiss_es
### Motivation
ElasticSearch is a keyword-based search syst... | kacperlukawski | https://github.com/huggingface/datasets/issues/5698 | null | false |
1,651,812,614 | 5,697 | Raise an error on missing distributed seed | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-04-03T10:44:58 | 2023-04-04T15:05:24 | 2023-04-04T14:58:16 | close https://github.com/huggingface/datasets/issues/5696 | lhoestq | https://github.com/huggingface/datasets/pull/5697 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5697",
"html_url": "https://github.com/huggingface/datasets/pull/5697",
"diff_url": "https://github.com/huggingface/datasets/pull/5697.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5697.patch",
"merged_at": "2023-04-04T14:58... | true |
1,651,707,008 | 5,696 | Shuffle a sharded iterable dataset without seed can lead to duplicate data | closed | [] | 2023-04-03T09:40:03 | 2023-04-04T14:58:18 | 2023-04-04T14:58:18 | As reported in https://github.com/huggingface/datasets/issues/5360
If `seed=None` in `.shuffle()`, shuffled datasets don't use the same shuffling seed across nodes.
Because of that, the list of shards is not shuffled the same way across nodes, and therefore some shards may be assigned to multiple nodes instead o...
1,650,974,156 | 5,695 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError | closed | [
"Hi ! It looks like an issue with PyArrow: https://issues.apache.org/jira/browse/ARROW-5030\r\n\r\nIt appears it can happen when you have parquet files with row groups larger than 2GB.\r\nI can see that your parquet files are around 10GB. It is usually advised to keep a value around the default value 500MB to avoid... | 2023-04-02T14:42:44 | 2024-05-15T12:04:47 | 2023-04-10T08:04:04 | ### Describe the bug
Calling `datasets.load_dataset` to load the (publicly available) dataset `theodor1289/wit` fails with `pyarrow.lib.ArrowNotImplementedError`.
### Steps to reproduce the bug
Steps to reproduce this behavior:
1. `!pip install datasets`
2. `!huggingface-cli login`
3. This step will throw the e... | amariucaitheodor | https://github.com/huggingface/datasets/issues/5695 | null | false |
1,650,467,793 | 5,694 | Dataset configuration | open | [
"Originally we also though about adding it to the YAML part of the README.md:\r\n\r\n```yaml\r\nbuilder_config:\r\n data_dir: data\r\n data_files:\r\n - split: train\r\n pattern: \"train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*\"\r\n```\r\n\r\nHaving it in the README.md could make it easier to mod... | 2023-04-01T13:08:05 | 2023-04-04T14:54:37 | null | Following discussions from https://github.com/huggingface/datasets/pull/5331
We could have something like `config.json` to define the configuration of a dataset.
```json
{
"data_dir": "data"
"data_files": {
"train": "train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*"
}
}
```
... | lhoestq | https://github.com/huggingface/datasets/issues/5694 | null | false |
1,649,934,749 | 5,693 | [docs] Split pattern search order | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-31T19:51:38 | 2023-04-03T18:43:30 | 2023-04-03T18:29:58 | This PR addresses #5681 about the order of split patterns 🤗 Datasets searches for when generating dataset splits. | stevhliu | https://github.com/huggingface/datasets/pull/5693 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5693",
"html_url": "https://github.com/huggingface/datasets/pull/5693",
"diff_url": "https://github.com/huggingface/datasets/pull/5693.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5693.patch",
"merged_at": "2023-04-03T18:29... | true |
1,649,818,644 | 5,692 | pyarrow.lib.ArrowInvalid: Unable to merge: Field <field> has incompatible types | open | [
"Hi! The link pointing to the code that generated the dataset is broken. Can you please fix it to make debugging easier?",
"> Hi! The link pointing to the code that generated the dataset is broken. Can you please fix it to make debugging easier?\r\n\r\nSorry about that, it's fixed now.\r\n",
"@cyanic-selkie cou... | 2023-03-31T18:19:40 | 2024-01-14T07:24:21 | null | ### Describe the bug
When loading the dataset [wikianc-en](https://huggingface.co/datasets/cyanic-selkie/wikianc-en) which I created using [this](https://github.com/cyanic-selkie/wikianc) code, I get the following error:
```
Traceback (most recent call last):
File "/home/sven/code/rector/answer-detection/trai... | cyanic-selkie | https://github.com/huggingface/datasets/issues/5692 | null | false |
1,649,737,526 | 5,691 | [docs] Compress data files | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"[Confirmed](https://huggingface.slack.com/archives/C02EMARJ65P/p1680541667004199) with the Hub team the file size limit for the Hugging Face Hub is 10MB :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<deta... | 2023-03-31T17:17:26 | 2023-04-19T13:37:32 | 2023-04-19T07:25:58 | This PR addresses the comments in #5687 about compressing text file extensions before uploading to the Hub. Also clarified what "too large" means based on the GitLFS [docs](https://docs.github.com/en/repositories/working-with-files/managing-large-files/about-git-large-file-storage). | stevhliu | https://github.com/huggingface/datasets/pull/5691 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5691",
"html_url": "https://github.com/huggingface/datasets/pull/5691",
"diff_url": "https://github.com/huggingface/datasets/pull/5691.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5691.patch",
"merged_at": "2023-04-19T07:25... | true |
1,648,956,349 | 5,689 | Support streaming Beam datasets from HF GCS preprocessed data | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"wikipedia\", \"20220301.en\", split=\"train\", streaming=True); item = next(iter(ds)); item\r\nOut[2]: \r\n{'id': '12',\r\n 'url': 'https://en.... | 2023-03-31T08:44:24 | 2023-04-12T05:57:55 | 2023-04-12T05:50:31 | This PR implements streaming Apache Beam datasets that are already preprocessed by us and stored in the HF Google Cloud Storage:
- natural_questions
- wiki40b
- wikipedia
This is done by streaming from the prepared Arrow files in HF Google Cloud Storage.
This will fix their corresponding dataset viewers. Relat... | albertvillanova | https://github.com/huggingface/datasets/pull/5689 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5689",
"html_url": "https://github.com/huggingface/datasets/pull/5689",
"diff_url": "https://github.com/huggingface/datasets/pull/5689.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5689.patch",
"merged_at": "2023-04-12T05:50... | true |
1,649,289,883 | 5,690 | raise AttributeError(f"No {package_name} attribute {name}") AttributeError: No huggingface_hub attribute hf_api | closed | [
"Hi @wccccp, thanks for reporting. \r\nThat's weird since `huggingface_hub` _has_ a module called `hf_api` and you are using a recent version of it. \r\n\r\nWhich version of `datasets` are you using? And is it a bug that you experienced only recently? (cc @lhoestq can it be somehow related to the recent release of ... | 2023-03-31T08:22:22 | 2023-07-21T14:21:57 | 2023-07-21T14:21:57 | ### Describe the bug
rta.sh
Traceback (most recent call last):
File "run.py", line 7, in <module>
import datasets
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module>
from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, Dat... | wccccp | https://github.com/huggingface/datasets/issues/5690 | null | false |
1,648,463,504 | 5,688 | Wikipedia download_and_prepare for GCS | closed | [
"Hi @adrianfagerland, thanks for reporting.\r\n\r\nPlease note that \"wikipedia\" is a special dataset, with an Apache Beam builder: https://beam.apache.org/\r\nYou can find more info about Beam datasets in our docs: https://huggingface.co/docs/datasets/beam\r\n\r\nIt was implemented to be run in parallel processin... | 2023-03-30T23:43:22 | 2024-03-15T15:59:18 | 2024-03-15T15:59:18 | ### Describe the bug
I am unable to download the wikipedia dataset onto GCS.
When I run the provided script, the memory first gets eaten up, then it crashes.
I tried running this on a VM with 128 GB RAM and all I got were two empty files: _data_builder.lock_, _data.incomplete/beam-temp-wikipedia-train-1ab2039a... | adrianfagerland | https://github.com/huggingface/datasets/issues/5688 | null | false |
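At the time of this issue, "wikipedia" was an Apache Beam dataset, so preparing it locally required passing a Beam runner; a sketch using the config named in the docs:
```python
from datasets import load_dataset

# DirectRunner runs Beam locally; memory-hungry for full Wikipedia, as the issue shows
ds = load_dataset("wikipedia", "20220301.en", beam_runner="DirectRunner")
```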
1,647,009,018 | 5,687 | Document to compress data files before uploading | closed | [
"Great idea!\r\n\r\nShould we also take this opportunity to include some audio/image file formats? Currently, it still reads very text heavy. Something like:\r\n\r\n> We support many text, audio, and image data extensions such as `.zip`, `.rar`, `.mp3`, and `.jpg` among many others. For data extensions like `.csv`,... | 2023-03-30T06:41:07 | 2023-04-19T07:25:59 | 2023-04-19T07:25:59 | In our docs to [Share a dataset to the Hub](https://huggingface.co/docs/datasets/upload_dataset), we tell users to upload directly their data files, like CSV, JSON, JSON-Lines, text,... However, these extensions are not tracked by Git LFS by default, as they are not in the `.giattributes` file. Therefore, if they are t... | albertvillanova | https://github.com/huggingface/datasets/issues/5687 | null | false |
1,646,308,228 | 5,686 | set dev version | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5686). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | 2023-03-29T18:24:13 | 2023-03-29T18:33:49 | 2023-03-29T18:24:22 | null | lhoestq | https://github.com/huggingface/datasets/pull/5686 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5686",
"html_url": "https://github.com/huggingface/datasets/pull/5686",
"diff_url": "https://github.com/huggingface/datasets/pull/5686.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5686.patch",
"merged_at": "2023-03-29T18:24... | true |
1,646,048,667 | 5,685 | Broken Image render on the hub website | closed | [
"Hi! \r\n\r\nYou can fix the viewer by adding the `dataset_info` YAML field deleted in https://huggingface.co/datasets/Francesco/cell-towers/commit/b95b59ddd91ebe9c12920f0efe0ed415cd0d4298 back to the metadata section of the card. \r\n\r\nTo avoid this issue in the feature, you can use `huggingface_hub`'s [RepoCard... | 2023-03-29T15:25:30 | 2023-03-30T07:54:25 | 2023-03-30T07:54:25 | ### Describe the bug
Hi :wave:
Not sure if this is the right place to ask, but I am trying to load a huge amount of datasets on the hub (:partying_face: ) but I am facing a little issue with the `image` type... | ... | https://github.com/huggingface/datasets/issues/5685 | null | false |
... | 5,681 | ... | closed | [
"...",
"Closed in #5693 "
] | 2023-03-29T11:44:49 | 2023-04-03T18:31:11 | 2023-04-03T18:31:11 | Following [this](https://github.com/huggingface/datasets/issues/5650) issue I think we should add a note about the order of patterns that is used to find splits, see [my comment](https://github.com/huggingface/datasets/issues/5650#issuecomment-1488412527). Also we should reference this page in pages about packaged load... | polinaeterna | https://github.com/huggingface/datasets/issues/5681 | null | false |
1,645,430,103 | 5,680 | Fix a description error for interleave_datasets. | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_a... | 2023-03-29T09:50:23 | 2023-03-30T13:14:19 | 2023-03-30T13:07:18 | There is a description mistake in the annotation of interleave_dataset with "all_exhausted" stopping_strategy.
``` python
d1 = Dataset.from_dict({"a": [0, 1, 2]})
d2 = Dataset.from_dict({"a": [10, 11, 12, 13]})
d3 = Dataset.from_dict({"a": [20, 21, 22, 23, 24]})
dataset = interleave_datasets([d1, d2, d3], stopping... | QizhiPei | https://github.com/huggingface/datasets/pull/5680 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5680",
"html_url": "https://github.com/huggingface/datasets/pull/5680",
"diff_url": "https://github.com/huggingface/datasets/pull/5680.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5680.patch",
"merged_at": "2023-03-30T13:07... | true |
1,645,184,622 | 5,679 | Allow load_dataset to take a working dir for intermediate data | open | [
"Hi ! AFAIK a dataset must be present on a local disk to be able to efficiently memory map the datasets Arrow files. What makes you think that it is possible to load from a cloud storage and have good performance ?\r\n\r\nAnyway it's already possible to download_and_prepare a dataset as Arrow files in a cloud stora... | 2023-03-29T07:21:09 | 2023-04-12T22:30:25 | null | ### Feature request
As a user, I can set a working dir for intermediate data creation. The processed files will be moved to the cache dir, like
```
load_dataset(..., working_dir="/temp/dir", cache_dir="/cloud_dir")
```
### Motivation
This will help the use case for using datasets with cloud storage as cache. It wi... | lu-wang-dl | https://github.com/huggingface/datasets/issues/5679 | null | false |
1,645,018,359 | 5,678 | Add support to create a Dataset from spark dataframe | closed | [
"if i read spark Dataframe , got an error on multi-node Spark cluster.\r\nDid the Api (Dataset.from_spark) support Spark cluster, read dataframe and save_to_disk?\r\n\r\nError: \r\n_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a b... | 2023-03-29T04:36:28 | 2024-08-27T14:43:19 | 2023-07-21T14:15:38 | ### Feature request
Add a new API `Dataset.from_spark` to create a Dataset from a Spark DataFrame.
### Motivation
Spark is a distributed computing framework that can handle large datasets. By supporting loading Spark DataFrames directly into Hugging Face Datasets, we can take advantage of Spark to process t... | lu-wang-dl | https://github.com/huggingface/datasets/issues/5678 | null | false |
1,644,828,606 | 5,677 | Dataset.map() crashes when any column contains more than 1000 empty dictionaries | closed | [] | 2023-03-29T00:01:31 | 2023-07-07T14:01:14 | 2023-07-07T14:01:14 | ### Describe the bug
`Dataset.map()` crashes any time any column contains more than `writer_batch_size` (default 1000) empty dictionaries, regardless of whether the column is being operated on. The error does not occur if the dictionaries are non-empty.
### Steps to reproduce the bug
Example:
```
import datasets... | mtoles | https://github.com/huggingface/datasets/issues/5677 | null | false |
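A sketch of the reported trigger, assuming any column with more than `writer_batch_size` (1000) empty dicts suffices:
```python
from datasets import Dataset

# More than writer_batch_size (default 1000) empty dicts in one column.
ds = Dataset.from_dict({"empty": [{}] * 1001, "text": ["x"] * 1001})
ds = ds.map(lambda example: example)  # reported to crash; non-empty dicts are fine
```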
1,641,763,478 | 5,675 | Filter datasets by language code | closed | [
"The dataset still can be found, if instead of using the search form you just enter the language code in the url, like https://huggingface.co/datasets?language=language:myv. \r\n\r\nBut of course having a more complete list of languages in the search form (or just a fallback to the language codes, if they are missi... | 2023-03-27T09:42:28 | 2023-03-30T08:08:15 | 2023-03-30T08:08:15 | Hi! I use the language search field on https://huggingface.co/datasets
However, some of the datasets tagged by ISO language code are not accessible by this search form.
For example, [myv_ru_2022](https://huggingface.co/datasets/slone/myv_ru_2022) has the `myv` language tag, but it is not included in the Languages search fo... | named-entity | https://github.com/huggingface/datasets/issues/5675 | null | false |
1,641,084,105 | 5,674 | Stored XSS | closed | [
"Hi! You can contact `[email protected]` to report this vulnerability."
] | 2023-03-26T20:55:58 | 2024-04-30T22:56:41 | 2023-03-27T21:01:55 | x | Fadavvi | https://github.com/huggingface/datasets/issues/5674 | null | false |
1,641,066,352 | 5,673 | Pass down storage options | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> download_and_prepare is not called when streaming a dataset, so we may need to have storage_options in the DatasetBuilder.__init__ ? This way it could also be passed later to as_streaming_dataset and the StreamingDownloadManager\r\... | 2023-03-26T20:09:37 | 2023-03-28T15:03:38 | 2023-03-28T14:54:17 | Remove implementation-specific kwargs from `file_utils.fsspec_get` and `file_utils.fsspec_head`, instead allowing them to be passed down via `storage_options`. This fixes an issue where s3fs did not recognize a timeout arg as well as fixes an issue mentioned in https://github.com/huggingface/datasets/issues/5281 by all... | dwyatte | https://github.com/huggingface/datasets/pull/5673 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5673",
"html_url": "https://github.com/huggingface/datasets/pull/5673",
"diff_url": "https://github.com/huggingface/datasets/pull/5673.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5673.patch",
"merged_at": "2023-03-28T14:54... | true |
1,641,005,322 | 5,672 | Pushing dataset to hub crash | closed | [
"Hi ! It's been fixed by https://github.com/huggingface/datasets/pull/5598. We're doing a new release tomorrow with the fix and you'll be able to push your 100k images ;)\r\n\r\nBasically `push_to_hub` used to fail if the remote repository already exists and has a README.md without dataset_info in the YAML tags.\r\... | 2023-03-26T17:42:13 | 2023-03-30T08:11:05 | 2023-03-30T08:11:05 | ### Describe the bug
Uploading a dataset with `push_to_hub()` fails without error description.
### Steps to reproduce the bug
Hey there,
I've built an image dataset of 100k image + text pairs as described here https://huggingface.co/docs/datasets/image_dataset#imagefolder
Now I'm trying to push it to the hub b... | tzvc | https://github.com/huggingface/datasets/issues/5672 | null | false |
1,640,840,012 | 5,671 | How to use `load_dataset('glue', 'cola')` | closed | [
"Sounds like an issue with incompatible `transformers` dependencies versions.\r\n\r\nCan you try to update `transformers` ?\r\n\r\nEDIT: I checked the `transformers` dependencies and it seems like you need `tokenizers>=0.10.1,<0.11` with `transformers==4.5.1`\r\n\r\nEDIT2: this old version of `datasets` seems to im... | 2023-03-26T09:40:34 | 2023-03-28T07:43:44 | 2023-03-28T07:43:43 | ### Describe the bug
I'm new to HuggingFace datasets, but I cannot use `load_dataset('glue', 'cola')`.
- I was stuck on the following problem:
```python
from datasets import load_dataset
cola_dataset = load_dataset('glue', 'cola')
------------------------------------------------------------------------... | makinzm | https://github.com/huggingface/datasets/issues/5671 | null | false |
1,640,607,045 | 5,670 | Unable to load multi class classification datasets | closed | [
"Hi ! This sounds related to https://github.com/huggingface/datasets/issues/5406\r\n\r\nUpdating `datasets` fixes the issue ;)",
"Thanks @lhoestq!\r\n\r\nI'll close this issue now."
] | 2023-03-25T18:06:15 | 2023-03-27T22:54:56 | 2023-03-27T22:54:56 | ### Describe the bug
I've been playing around with the huggingface library, mostly with `datasets`, and wanted to download the multi-class classification datasets to fine-tune BERT on this task ([link](https://huggingface.co/docs/transformers/training#train-with-pytorch-trainer)).
While loading the dataset, I'm getting... | ysahil97 | https://github.com/huggingface/datasets/issues/5670 | null | false |
1,638,070,046 | 5,669 | Almost identical datasets, huge performance difference | open | [
"Do I miss something here?",
"Hi! \r\n\r\nThe first dataset stores images as bytes (the \"image\" column type is `datasets.Image()`) and decodes them as `PIL.Image` objects and the second dataset stores them as variable-length lists (the \"image\" column type is `datasets.Sequence(...)`)), so I guess going from `... | 2023-03-23T18:20:20 | 2023-04-09T18:56:23 | null | ### Describe the bug
I am struggling to understand the (huge) performance difference between two datasets that are almost identical.
### Steps to reproduce the bug
# Fast (normal) dataset speed:
```python
import cv2
from datasets import load_dataset
from torch.utils.data import DataLoader
dataset = load_dataset(... | eli-osherovich | https://github.com/huggingface/datasets/issues/5669 | null | false |
1,638,018,598 | 5,668 | Support for downloading only provided split | open | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5668). All of your documentation changes will be reflected on that endpoint.",
"My previous comment didn't create the retro-link in the PR. I write it here again.\r\n\r\nYou can check the context and the discussions we had abou... | 2023-03-23T17:53:39 | 2023-03-24T06:43:14 | null | We can pass split to `_split_generators()`.
But I'm not sure if it's possible to solve cache issues, mostly with `dataset_info.json` | polinaeterna | https://github.com/huggingface/datasets/pull/5668 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5668",
"html_url": "https://github.com/huggingface/datasets/pull/5668",
"diff_url": "https://github.com/huggingface/datasets/pull/5668.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5668.patch",
"merged_at": null
} | true |
1,637,789,361 | 5,667 | Jax requires jaxlib | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-23T15:41:09 | 2023-03-23T16:23:11 | 2023-03-23T16:14:52 | close https://github.com/huggingface/datasets/issues/5666 | lhoestq | https://github.com/huggingface/datasets/pull/5667 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5667",
"html_url": "https://github.com/huggingface/datasets/pull/5667",
"diff_url": "https://github.com/huggingface/datasets/pull/5667.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5667.patch",
"merged_at": "2023-03-23T16:14... | true |
1,637,675,062 | 5,666 | Support tensorflow 2.12.0 in CI | closed | [] | 2023-03-23T14:37:51 | 2023-03-23T16:14:54 | 2023-03-23T16:14:54 | Once we find out the root cause of:
- #5663
we should revert the temporary pin on tensorflow introduced by:
- #5664 | albertvillanova | https://github.com/huggingface/datasets/issues/5666 | null | false |
1,637,193,648 | 5,665 | Feature request: IterableDataset.push_to_hub | closed | [
"+1",
"+1",
"+1, should be possible now? :) https://huggingface.co/blog/xethub-joins-hf",
"Haha we're working hard to integrate Xet in the HF back-end, it will enable cool use cases :)\n\nAnyway about `IterableDataset.push_to_hub`, I'd be happy to to provide guidance and answer questions if anyone wants to st... | 2023-03-23T09:53:04 | 2025-06-06T16:13:22 | 2025-06-06T16:12:36 | ### Feature request
It'd be great to have a lazy push to hub, similar to the lazy loading we have with `IterableDataset`.
Suppose you'd like to filter [LAION](https://huggingface.co/datasets/laion/laion400m) based on certain conditions, but as LAION doesn't fit into your disk, you'd like to leverage streaming:
`... | NielsRogge | https://github.com/huggingface/datasets/issues/5665 | null | false |
1,637,192,684 | 5,664 | Fix CI by temporarily pinning tensorflow < 2.12.0 | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-23T09:52:26 | 2023-03-23T10:17:11 | 2023-03-23T10:09:54 | As a hotfix for our CI, temporarily pin `tensorflow` upper version:
- In Python 3.10, tensorflow-2.12.0 also installs `jax`
Fix #5663
Until root cause is fixed. | albertvillanova | https://github.com/huggingface/datasets/pull/5664 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5664",
"html_url": "https://github.com/huggingface/datasets/pull/5664",
"diff_url": "https://github.com/huggingface/datasets/pull/5664.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5664.patch",
"merged_at": "2023-03-23T10:09... | true |
1,637,173,248 | 5,663 | CI is broken: ModuleNotFoundError: jax requires jaxlib to be installed | closed | [] | 2023-03-23T09:39:43 | 2023-03-23T10:09:55 | 2023-03-23T10:09:55 | CI test_py310 is broken: see https://github.com/huggingface/datasets/actions/runs/4498945505/jobs/7916194236?pr=5662
```
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_in_memory - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installati... | albertvillanova | https://github.com/huggingface/datasets/issues/5663 | null | false |
1,637,140,813 | 5,662 | Fix unnecessary dict comprehension | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I am merging because the CI error is unrelated.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | re... | 2023-03-23T09:18:58 | 2023-03-23T09:46:59 | 2023-03-23T09:37:49 | After ruff-0.0.258 release, the C416 rule was updated with unnecessary dict comprehensions. See:
- https://github.com/charliermarsh/ruff/releases/tag/v0.0.258
- https://github.com/charliermarsh/ruff/pull/3605
This PR fixes one unnecessary dict comprehension in our code: no need to unpack and re-pack the tuple valu... | albertvillanova | https://github.com/huggingface/datasets/pull/5662 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5662",
"html_url": "https://github.com/huggingface/datasets/pull/5662",
"diff_url": "https://github.com/huggingface/datasets/pull/5662.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5662.patch",
"merged_at": "2023-03-23T09:37... | true |
1,637,129,445 | 5,661 | CI is broken: Unnecessary `dict` comprehension | closed | [] | 2023-03-23T09:13:01 | 2023-03-23T09:37:51 | 2023-03-23T09:37:51 | CI check_code_quality is broken:
```
src/datasets/arrow_dataset.py:3267:35: C416 [*] Unnecessary `dict` comprehension (rewrite using `dict()`)
Found 1 error.
``` | albertvillanova | https://github.com/huggingface/datasets/issues/5661 | null | false |
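For reference, the shape of code that C416 flags: a comprehension that merely repacks key/value pairs, rewritable with `dict()`:
```python
pairs = [("a", 1), ("b", 2)]
d = {k: v for k, v in pairs}  # C416: unnecessary dict comprehension
d = dict(pairs)               # the suggested rewrite
```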
1,635,543,646 | 5,660 | integration with imbalanced-learn | closed | [
"You can convert any dataset to pandas to be used with imbalanced-learn using `.to_pandas()`\r\n\r\nOtherwise if you want to keep a `Dataset` object and still use e.g. [make_imbalance](https://imbalanced-learn.org/stable/references/generated/imblearn.datasets.make_imbalance.html#imblearn.datasets.make_imbalance), y... | 2023-03-22T11:05:17 | 2023-07-06T18:10:15 | 2023-07-06T18:10:15 | ### Feature request
Wouldn't it be great if the various class balancing operations from imbalanced-learn were available as part of datasets?
### Motivation
I'm trying to use imbalanced-learn to balance a dataset, but it's not clear how to get the two to interoperate - what would be great would be some examples. I'v... | tansaku | https://github.com/huggingface/datasets/issues/5660 | null | false |
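The round trip the comment describes, convert to pandas, rebalance, convert back, might look like this; the label column and class counts are assumptions:
```python
from datasets import Dataset
from imblearn.datasets import make_imbalance

ds = Dataset.from_dict({"feat": list(range(200)), "label": [0] * 150 + [1] * 50})
df = ds.to_pandas()
# Under-sample class 0 down to 50 examples to match class 1.
X, y = make_imbalance(df[["feat"]], df["label"], sampling_strategy={0: 50, 1: 50})
balanced = Dataset.from_pandas(X.assign(label=y.values), preserve_index=False)
```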
1,635,447,540 | 5,659 | [Audio] Soundfile/libsndfile requirements too stringent for decoding mp3 files | closed | [
"cc @polinaeterna @lhoestq ",
"@sanchit-gandhi can you please also post the logs of `pip install soundfile==0.12.1`? To check what wheel is being installed or if it's being built from source (I think it's the latter case). \r\nRequired `libsndfile` binary **should** be bundeled with `soundfile` wheel but I assume... | 2023-03-22T10:07:33 | 2024-07-12T01:35:01 | 2023-04-07T08:51:28 | ### Describe the bug
I'm encountering several issues trying to load mp3 audio files using `datasets` on a TPU v4.
The PR https://github.com/huggingface/datasets/pull/5573 updated the audio loading logic to rely solely on the `soundfile`/`libsndfile` libraries for loading audio samples, regardless of their file t... | sanchit-gandhi | https://github.com/huggingface/datasets/issues/5659 | null | false |
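A quick way to check whether the mp3 requirement is met: libsndfile gained mp3 support in 1.1.0, and recent `soundfile` wheels bundle their own binary:
```python
import soundfile as sf

print(sf.__version__)              # >= 0.12 wheels bundle a recent libsndfile
print(sf.__libsndfile_version__)   # needs >= 1.1.0 for mp3 decoding
```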
1,634,867,204 | 5,658 | docs: Update num_shards docs to mention num_proc on Dataset and DatasetDict | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-22T00:12:18 | 2023-03-24T16:43:34 | 2023-03-24T16:36:21 | Closes #5653
@mariosasko | connor-henderson | https://github.com/huggingface/datasets/pull/5658 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5658",
"html_url": "https://github.com/huggingface/datasets/pull/5658",
"diff_url": "https://github.com/huggingface/datasets/pull/5658.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5658.patch",
"merged_at": "2023-03-24T16:36... | true |
1,634,156,563 | 5,656 | Fix `fsspec.open` when using an HTTP proxy | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-21T15:23:29 | 2023-03-23T14:14:50 | 2023-03-23T13:15:46 | Most HTTP(S) downloads from this library support proxy automatically by reading the `HTTP_PROXY` environment variable (et al.) because `requests` is widely used. However, in some parts of the code, `fsspec` is used, which in turn uses `aiohttp` for HTTP(S) requests (as opposed to `requests`), which in turn doesn't supp... | bryant1410 | https://github.com/huggingface/datasets/pull/5656 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5656",
"html_url": "https://github.com/huggingface/datasets/pull/5656",
"diff_url": "https://github.com/huggingface/datasets/pull/5656.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5656.patch",
"merged_at": "2023-03-23T13:15... | true |
1,634,030,017 | 5,655 | Improve features decoding in to_iterable_dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-21T14:18:09 | 2023-03-23T13:19:27 | 2023-03-23T13:12:25 | Following discussion at https://github.com/huggingface/datasets/pull/5589
Right now `to_iterable_dataset` on images/audio hurts iterable dataset performance a lot (e.g. x4 slower because it encodes+decodes images/audios unnecessarily).
I fixed it by providing a generator that yields undecoded examples | lhoestq | https://github.com/huggingface/datasets/pull/5655 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5655",
"html_url": "https://github.com/huggingface/datasets/pull/5655",
"diff_url": "https://github.com/huggingface/datasets/pull/5655.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5655.patch",
"merged_at": "2023-03-23T13:12... | true |
1,633,523,705 | 5,654 | Offset overflow when executing Dataset.map | open | [
"Upd. the above code works if we replace `25` with `1`, but the result value at key \"hr\" is not a tensor but a list of lists of lists of uint8.\r\n\r\nAdding `train_data.set_format(\"torch\")` after map fixes this, but the original issue remains\r\n\r\n",
"As a workaround, one can replace\r\n`return {\"hr\": to... | 2023-03-21T09:33:27 | 2023-03-21T10:32:07 | null | ### Describe the bug
Hi, I'm trying to use the `.map` method to cache multiple random crops from the image to speed up data processing during training, as the image size is too big.
The map function executes all iterations, and then returns the following error:
```bash
Traceback (most recent call last): ... | jan-pair | https://github.com/huggingface/datasets/issues/5654 | null | false |
1,633,254,159 | 5,653 | Doc: save_to_disk, `num_proc` will affect `num_shards`, but it's not documented | closed | [
"I agree this should be documented"
] | 2023-03-21T05:25:35 | 2023-03-24T16:36:23 | 2023-03-24T16:36:23 | ### Describe the bug
[`num_proc`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_proc) will affect `num_shards`, but it's not documented
### Steps to reproduce the bug
Nothing to reproduce
### Expected behavior
[document of `num_shards`](https://... | RmZeta2718 | https://github.com/huggingface/datasets/issues/5653 | null | false |
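The undocumented behavior in question, as a sketch: with `num_proc` set and `num_shards` unset, each worker writes its own shard:
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(1000))})
ds.save_to_disk("out_dir", num_proc=8)                  # 8 shards: num_shards defaults to num_proc
ds.save_to_disk("out_dir2", num_proc=8, num_shards=16)  # pass num_shards explicitly to control it
```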
1,632,546,073 | 5,652 | Copy features | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-20T17:17:23 | 2023-03-23T13:19:19 | 2023-03-23T13:12:08 | Some users (even internally at HF) are doing
```python
dset_features = dset.features
dset_features.pop(col_to_remove)
dset = dset.map(..., features=dset_features)
```
Right now this causes issues because it modifies the features dict in place before the map.
In this PR I modified `dset.features` to return a ... | lhoestq | https://github.com/huggingface/datasets/pull/5652 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5652",
"html_url": "https://github.com/huggingface/datasets/pull/5652",
"diff_url": "https://github.com/huggingface/datasets/pull/5652.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5652.patch",
"merged_at": "2023-03-23T13:12... | true |
1,631,967,509 | 5,651 | expanduser in save_to_disk | closed | [
"`save_to_disk` should indeed expand `~`. Marking it as a \"good first issue\".",
"#self-assign\r\n\r\nFile path to code: \r\n\r\nhttps://github.com/huggingface/datasets/blob/2.13.0/src/datasets/arrow_dataset.py#L1364\r\n\r\n@RmZeta2718 I created a pull request for this issue. ",
"Hello, \r\nIt says `save_to_di... | 2023-03-20T12:02:18 | 2023-10-27T14:04:37 | 2023-10-27T14:04:37 | ### Describe the bug
save_to_disk() does not expand `~`
1. `dataset = load_datasets("any dataset")`
2. `dataset.save_to_disk("~/data")`
3. a folder named "~" created in current folder
4. FileNotFoundError is raised, because the expanded path does not exist (`/home/<user>/data`)
related issue https://github.... | RmZeta2718 | https://github.com/huggingface/datasets/issues/5651 | null | false |
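Until the fix lands, a workaround sketch is to expand the path manually:

```python
import os
from datasets import load_dataset

dataset = load_dataset("rotten_tomatoes", split="train")  # any dataset
dataset.save_to_disk(os.path.expanduser("~/data"))  # -> /home/<user>/data
```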
1,630,336,919 | 5,650 | load_dataset can't work correct with my image data | closed | [
"Can you post a reproducible code snippet of what you tried to do?\r\n\r\n",
"> Can you post a reproducible code snippet of what you tried to do?\n> \n> \n\n```python\nfrom datasets import load_dataset\n\ndataset = load_dataset(\"my_folder_name\", split=\"train\")\n```",
"hi @WiNE-iNEFF ! can you please also te... | 2023-03-18T13:59:13 | 2023-07-24T14:13:02 | 2023-07-24T14:13:01 | I have about 20000 images in my folder which divided into 4 folders with class names.
When i use load_dataset("my_folder_name", split="train") this function create dataset in which there are only 4 images, the remaining 19000 images were not added there. What is the problem and did not understand. Tried converting imag... | WiNE-iNEFF | https://github.com/huggingface/datasets/issues/5650 | null | false |
1,630,173,460 | 5,649 | The index column created with .to_sql() is dependent on the batch_size when writing | closed | [
"Thanks for reporting, @lsb. \r\n\r\nWe are investigating it.\r\n\r\nOn the other hand, please note that in the next `datasets` release, the index will not be created by default (see #5583). If you would like to have it, you will need to explicitly pass `index=True`. ",
"I think this is low enough priority for me... | 2023-03-18T05:25:17 | 2023-06-17T07:01:57 | 2023-06-17T07:01:57 | ### Describe the bug
It seems like the "index" column is designed to be unique? The values are only unique per batch. The SQL index is not a unique index.
This can be a problem, for instance, when building a FAISS index on a dataset and then trying to match up ids with an SQL export.
### Steps to reproduce the ... | lsb | https://github.com/huggingface/datasets/issues/5649 | null | false |
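A hedged workaround sketch: skip the auto-generated index and write an explicit id column, so ids stay stable regardless of `batch_size`; `index=False` is forwarded to pandas' `to_sql`.

```python
import sqlite3
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})
ds = ds.map(lambda ex, i: {"id": i}, with_indices=True)  # globally unique ids
con = sqlite3.connect("data.db")
ds.to_sql("docs", con, index=False)  # extra kwargs go to pandas DataFrame.to_sql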
1,629,253,719 | 5,648 | flatten_indices doesn't work with pandas format | open | [
"Thanks for reporting! This can be fixed by setting the format to `arrow` in `flatten_indices` and restoring the original format after the flattening. I'm working on a PR that reduces the number of the `flatten_indices` calls in our codebase and makes `flatten_indices` a no-op when a dataset does not have an indice... | 2023-03-17T12:44:25 | 2023-03-21T13:12:03 | null | ### Describe the bug
Hi,
I noticed that `flatten_indices` throws an error when the batch format is `pandas`. This is probably because `flatten_indices` uses `map` internally, which doesn't accept DataFrames as the transformation function's output.
### Steps to reproduce the bug
tabular_data = pd.DataFrame(np.r... | alialamiidrissi | https://github.com/huggingface/datasets/issues/5648 | null | false |
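A hedged workaround until the fix: drop the pandas format before flattening and restore it afterwards.

```python
import numpy as np
import pandas as pd
from datasets import Dataset

ds = Dataset.from_pandas(pd.DataFrame(np.random.rand(4, 2), columns=["a", "b"]))
ds = ds.with_format("pandas")
ds = ds.shuffle(seed=0)          # creates an indices mapping
fmt_type = ds.format["type"]     # remember the current format ("pandas")
ds = ds.with_format(None).flatten_indices().with_format(fmt_type)
```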
1,628,225,544 | 5,647 | Make all print statements optional | closed | [
"related to #5444 ",
"We now log these messages instead of printing them (addressed in #6019), so I'm closing this issue."
] | 2023-03-16T20:30:07 | 2023-07-21T14:20:25 | 2023-07-21T14:20:24 | ### Feature request
Make all print statements optional to speed up development
### Motivation
I'm loading multiple tiny datasets, and all the print statements make loading slower
### Your contribution
I can help contribute | gagan3012 | https://github.com/huggingface/datasets/issues/5647 | null | false |
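For reference, these verbosity controls exist and address the request (the issue was ultimately closed once the prints became log messages):

```python
import datasets

datasets.logging.set_verbosity_error()  # silence informational log output
datasets.disable_progress_bar()         # hide tqdm progress bars while loading
```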
1,627,838,762 | 5,646 | Allow self as key in `Features` | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-16T16:17:03 | 2023-03-16T17:21:58 | 2023-03-16T17:14:50 | Fix #5641 | mariosasko | https://github.com/huggingface/datasets/pull/5646 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5646",
"html_url": "https://github.com/huggingface/datasets/pull/5646",
"diff_url": "https://github.com/huggingface/datasets/pull/5646.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5646.patch",
"merged_at": "2023-03-16T17:14... | true |
1,627,108,278 | 5,645 | Datasets map and select(range()) is giving dill error | closed | [
"It looks like an error that we observed once in https://github.com/huggingface/datasets/pull/5166\r\n\r\nCan you try to update `datasets` ?\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\nif it doesn't work, can you make sure you don't have packages installed that may modify `dill`'s behavior, such as `apache-... | 2023-03-16T10:01:28 | 2023-03-17T04:24:51 | 2023-03-17T04:24:51 | ### Describe the bug
I'm using the Hugging Face Datasets library to load the dataset in Google Colab
When I do,
> data = train_dataset.select(range(10))
or
> train_datasets = train_dataset.map(
> process_data_to_model_inputs,
> batched=True,
> batch_size=batch_size,
> remove_columns... | Tanya-11 | https://github.com/huggingface/datasets/issues/5645 | null | false |
1,626,204,046 | 5,644 | Allow direct cast from binary to Audio/Image | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-15T20:02:54 | 2023-03-16T14:20:44 | 2023-03-16T14:12:55 | To address https://github.com/huggingface/datasets/discussions/5593.
| mariosasko | https://github.com/huggingface/datasets/pull/5644 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5644",
"html_url": "https://github.com/huggingface/datasets/pull/5644",
"diff_url": "https://github.com/huggingface/datasets/pull/5644.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5644.patch",
"merged_at": "2023-03-16T14:12... | true |
1,626,160,220 | 5,643 | Support PyArrow arrays as column values in `from_dict` | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-15T19:32:40 | 2023-03-16T17:23:06 | 2023-03-16T17:15:40 | For consistency with `pa.Table.from_pydict`, which supports both Python lists and PyArrow arrays as column values.
"Fixes" https://discuss.huggingface.co/t/pyarrow-lib-floatarray-did-not-recognize-python-value-type-when-inferring-an-arrow-data-type/33417 | mariosasko | https://github.com/huggingface/datasets/pull/5643 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5643",
"html_url": "https://github.com/huggingface/datasets/pull/5643",
"diff_url": "https://github.com/huggingface/datasets/pull/5643.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5643.patch",
"merged_at": "2023-03-16T17:15... | true |
1,626,043,177 | 5,642 | Bump hfh to 0.11.0 | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-15T18:26:07 | 2023-03-20T12:34:09 | 2023-03-20T12:26:58 | to fix errors like
```
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/...
```
(e.g. from this [failing CI](https://github.com/huggingface/datasets/actions/runs/4428956210/jobs/7769160997))
0.11.0 is the current mini... | lhoestq | https://github.com/huggingface/datasets/pull/5642 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5642",
"html_url": "https://github.com/huggingface/datasets/pull/5642",
"diff_url": "https://github.com/huggingface/datasets/pull/5642.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5642.patch",
"merged_at": "2023-03-20T12:26... | true |
1,625,942,730 | 5,641 | Features cannot be named "self" | closed | [] | 2023-03-15T17:16:40 | 2023-03-16T17:14:51 | 2023-03-16T17:14:51 | ### Describe the bug
Hi,
I noticed that we cannot create a Hugging Face dataset from a Pandas DataFrame with a column named `self`.
The error seems to be coming from arguments validation in the `Features.from_dict` function.
### Steps to reproduce the bug
```python
import datasets
dummy_pandas = pd.DataFrame([0... | alialamiidrissi | https://github.com/huggingface/datasets/issues/5641 | null | false |
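A minimal reproduction sketch consistent with the description (the exact values in the original snippet are truncated above, so these are placeholders):

```python
import pandas as pd
from datasets import Dataset

dummy_pandas = pd.DataFrame([0, 1, 2, 3], columns=["self"])
ds = Dataset.from_pandas(dummy_pandas)  # raised a validation error before #5646
```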
1,625,896,057 | 5,640 | Less zip false positives | closed | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-03-15T16:48:59 | 2023-03-16T13:47:37 | 2023-03-16T13:40:12 | `zipfile.is_zipfile` return false positives for some Parquet files. It causes errors when loading certain parquet datasets, where some files are considered ZIP files by `zipfile.is_zipfile`
This is a known issue: https://github.com/python/cpython/issues/72680
At first I wanted to rely only on magic numbers, but t... | lhoestq | https://github.com/huggingface/datasets/pull/5640 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5640",
"html_url": "https://github.com/huggingface/datasets/pull/5640",
"diff_url": "https://github.com/huggingface/datasets/pull/5640.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5640.patch",
"merged_at": "2023-03-16T13:40... | true |
1,625,737,098 | 5,639 | Parquet file wrongly recognized as zip prevents loading a dataset | closed | [] | 2023-03-15T15:20:45 | 2023-03-16T13:40:14 | 2023-03-16T13:40:14 | ### Describe the bug
When trying to `load_dataset_builder` for `HuggingFaceGECLM/StackExchange_Mar2023`, extraction fails, because parquet file [devops-00000-of-00001-22fe902fd8702892.parquet](https://huggingface.co/datasets/HuggingFaceGECLM/StackExchange_Mar2023/resolve/1f8c9a2ab6f7d0f9ae904b8b922e4384592ae1a5/data... | clefourrier | https://github.com/huggingface/datasets/issues/5639 | null | false |
1,625,564,471 | 5,638 | xPath to implement all operations for Path | closed | [
" I think https://github.com/fsspec/universal_pathlib is the project you are looking for.\r\n\r\n`xPath` has the methods often used in dataset scripts, and `mkdir` is not one of them (`dl_manager`'s role is to \"interact\" with the file system, so using `mkdir` is discouraged).",
"Right is there a difference betw... | 2023-03-15T13:47:11 | 2023-03-17T13:21:12 | 2023-03-17T13:21:12 | ### Feature request
The current xPath implementation is a great extension of Path for working with remote objects. However, some methods, such as `mkdir`, are not implemented correctly. They should rely on `fsspec` methods instead of defaulting to `Path` methods, which only work locally (see the sketch after this entry).
### Motivation
I'm using... | thomasw21 | https://github.com/huggingface/datasets/issues/5638 | null | false |
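An fsspec-based sketch of what a correct remote `mkdir` could look like (the URL is a placeholder; the matching fsspec backend, e.g. s3fs, must be installed):

```python
import fsspec

fs, path = fsspec.core.url_to_fs("s3://my-bucket/new_dir")
fs.makedirs(path, exist_ok=True)  # works for remote and local filesystems alike
```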
1,625,295,691 | 5,637 | IterableDataset with_format does not support 'device' keyword for jax | open | [
"Hi! Yes, only `torch` is currently supported. Unlike `Dataset`, `IterableDataset` is not PyArrow-backed, so we cannot simply call `to_numpy` on the underlying subtables to format them numerically. Instead, we must manually convert examples to (numeric) arrays while preserving consistency with `Dataset`, which is n... | 2023-03-15T11:04:12 | 2025-01-07T06:59:33 | null | ### Describe the bug
As seen here: https://huggingface.co/docs/datasets/use_with_jax dataset.with_format() supports the keyword 'device', to put data on a specific device when loaded as jax. However, when called on an IterableDataset, I got the error `TypeError: with_format() got an unexpected keyword argument 'devi... | Lime-Cakes | https://github.com/huggingface/datasets/issues/5637 | null | false |
1,623,721,577 | 5,636 | Fix CI: ignore C901 ("some_func" is too complex) in `ruff` | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-14T15:29:11 | 2023-03-14T16:37:06 | 2023-03-14T16:29:52 | idk if I should have added this ignore to `ruff` too, but I added :) | polinaeterna | https://github.com/huggingface/datasets/pull/5636 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5636",
"html_url": "https://github.com/huggingface/datasets/pull/5636",
"diff_url": "https://github.com/huggingface/datasets/pull/5636.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5636.patch",
"merged_at": "2023-03-14T16:29... | true |
1,623,682,558 | 5,635 | Pass custom metadata filename to Image/Audio folders | open | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5635). All of your documentation changes will be reflected on that endpoint.",
"I'm not a big fan of this new param - I find assigning metadata files to splits via the `data_files` param cleaner. Also, assuming that the metadat... | 2023-03-14T15:08:16 | 2023-03-22T17:50:31 | null | This is a quick fix.
It now requires passing data via the `data_files` parameter, including the required metadata file there, and passing its filename via the `metadata_filename` parameter.
For example, with the structure like:
```
data
images_dir/
im1.jpg
im2.jpg
...
metadata_dir/
meta_file... | polinaeterna | https://github.com/huggingface/datasets/pull/5635 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5635",
"html_url": "https://github.com/huggingface/datasets/pull/5635",
"diff_url": "https://github.com/huggingface/datasets/pull/5635.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5635.patch",
"merged_at": null
} | true |
1,622,424,174 | 5,634 | Not all progress bars are showing up when they should for downloading dataset | closed | [
"Hi! \r\n\r\nBy default, tqdm has `leave=True` to \"keep all traces of the progress bar upon the termination of iteration\". However, we use `leave=False` in some places (as of recently), which removes the bar once the iteration is over.\r\n\r\nI feel like our TQDM bars are noisy, so I think we should always set `l... | 2023-03-13T23:04:18 | 2023-10-11T16:30:16 | 2023-10-11T16:30:16 | ### Describe the bug
While downloading the rotten tomatoes dataset, not all progress bars are displayed properly. This might be related to [this ticket](https://github.com/huggingface/datasets/issues/5117), which raised the same concern, but it's not clear whether that fix solves this issue too.
ipywidgets
<img width=... | garlandz-db | https://github.com/huggingface/datasets/issues/5634 | null | false |
1,621,469,970 | 5,633 | Cannot import datasets | closed | [
"Okay, the issue was likely caused by mixing `conda` and `pip` usage - I forgot that I have already used `pip` in this environment previously and that it was 'spoiled' because of it. Creating another environment and installing `datasets` by pip with other packages from the `requirements.txt` file solved the problem... | 2023-03-13T13:14:44 | 2023-03-13T17:54:19 | 2023-03-13T17:54:19 | ### Describe the bug
Hi,
I cannot even import the library :( I installed it by running:
```
$ conda install datasets
```
Then I realized I should maybe use the huggingface channel, because I encountered the error below, so I ran:
```
$ conda remove datasets
$ conda install -c huggingface datasets
```
Pl... | ruplet | https://github.com/huggingface/datasets/issues/5633 | null | false |
1,621,177,391 | 5,632 | Dataset cannot convert too large dictionary | open | [
"Answered on the forum:\r\n\r\n> To fix the overflow error, we need to merge [support LargeListArray in pyarrow by xwwwwww · Pull Request #4800 · huggingface/datasets · GitHub](https://github.com/huggingface/datasets/pull/4800), which adds support for the large lists. However, before merging it, we need to come up ... | 2023-03-13T10:14:40 | 2023-03-16T15:28:57 | null | ### Describe the bug
Hello everyone!
I tried to build a new dataset with the command "dict_valid = datasets.Dataset.from_dict({'input_values': values_array})".
However, I have a very large dataset (~400 GB) and it seems that `datasets` cannot handle this.
Indeed, I can create the dataset until a certain size of m... | MaraLac | https://github.com/huggingface/datasets/issues/5632 | null | false |
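A hedged workaround sketch until large-list support lands: stream examples in with `from_generator` so no single Arrow array has to hold the full ~400 GB; `iter_value_rows` is a hypothetical chunked reader standing in for the user's data source.

```python
from datasets import Dataset

def gen():
    for row in iter_value_rows():  # hypothetical: yields one example at a time
        yield {"input_values": row}

dict_valid = Dataset.from_generator(gen)  # examples are flushed to disk in chunks
```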
1,620,442,854 | 5,631 | Custom split names | closed | [
"Hi!\r\n\r\nYou can also use names other than \"train\", \"validation\" and \"test\". As an example, check the [script](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/blob/e095840f23f3dffc1056c078c2f9320dad9ca74d/common_voice_11_0.py#L139) of the Common Voice 11 dataset. "
] | 2023-03-12T17:21:43 | 2023-03-24T14:13:00 | 2023-03-24T14:13:00 | ### Feature request
Hi,
I have participated in multiple NLP tasks that have more than just train, test, and validation splits; there can be multiple validation or test sets. But it seems that currently only those three splits are supported. It would be nice to have support for more splits on the Hub. (curren... | ErfanMoosaviMonazzah | https://github.com/huggingface/datasets/issues/5631 | null | false |
1,620,327,510 | 5,630 | adds early exit if url is `PathLike` | open | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5630). All of your documentation changes will be reflected on that endpoint."
] | 2023-03-12T11:23:28 | 2023-03-15T11:58:38 | null | Closes #4864
Should fix errors thrown when attempting to load `json` dataset using `pathlib.Path` in `data_files` argument. | vvvm23 | https://github.com/huggingface/datasets/pull/5630 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5630",
"html_url": "https://github.com/huggingface/datasets/pull/5630",
"diff_url": "https://github.com/huggingface/datasets/pull/5630.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5630.patch",
"merged_at": null
} | true |
1,619,921,247 | 5,629 | load_dataset gives "403" error when using Financial phrasebank | open | [
"Hi! You seem to be using an outdated version of `datasets` that downloads the older script version. To avoid the error, you can either pass `revision=\"main\"` to `load_dataset` (this can fail if a script uses newer features of the lib) or update your installation with `pip install -U datasets` (better solution)."... | 2023-03-11T07:46:39 | 2023-03-13T18:27:26 | null | When I try to load this dataset, I receive the following error:
ConnectionError: Couldn't reach https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip (error 403)
Has this been seen before? Thanks. The website loads ... | Jimchoo91 | https://github.com/huggingface/datasets/issues/5629 | null | false |
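The fixes suggested in the reply above, sketched (`sentences_allagree` is one of this dataset's configs):

```python
from datasets import load_dataset

# Either upgrade first (pip install -U datasets), or pin the script revision
# so the updated download URL is used:
ds = load_dataset("financial_phrasebank", "sentences_allagree", revision="main")
```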
1,619,641,810 | 5,628 | add kwargs to index search | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2023-03-10T21:24:58 | 2023-03-15T14:48:47 | 2023-03-15T14:46:04 | This PR proposes to add kwargs to index search methods.
This is particularly useful for setting the timeout of a query on elasticsearch.
A typical use case would be:
```python
dset.add_elasticsearch_index("filename", es_client=es_client)
scores, examples = dset.get_nearest_examples("filename", "my_name-train_2... | SaulLu | https://github.com/huggingface/datasets/pull/5628 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5628",
"html_url": "https://github.com/huggingface/datasets/pull/5628",
"diff_url": "https://github.com/huggingface/datasets/pull/5628.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5628.patch",
"merged_at": "2023-03-15T14:46... | true |
1,619,336,609 | 5,627 | Unable to load AutoTrain-generated dataset from the hub | open | [
"The AutoTrain format is not supported right now. I think it would require a dedicated dataset builder",
"Okay, good to know. Thanks for the reply. For now I will just have to\nmanage the split manually before training, because I can’t find any way of\npulling out file indices or file names from the autogenerated... | 2023-03-10T17:25:58 | 2023-03-11T15:44:42 | null | ### Describe the bug
DatasetGenerationError: An error occurred while generating the dataset -> ValueError: Couldn't cast ... because column names don't match
```
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: struct<filename: string>
child 0, filename: string
... | ijmiller2 | https://github.com/huggingface/datasets/issues/5627 | null | false |
1,619,252,984 | 5,626 | Support streaming datasets with numpy.load | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-10T16:33:39 | 2023-03-21T06:36:05 | 2023-03-21T06:28:54 | Support streaming datasets with `numpy.load`.
See: https://huggingface.co/datasets/qgallouedec/gia_dataset/discussions/1 | albertvillanova | https://github.com/huggingface/datasets/pull/5626 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5626",
"html_url": "https://github.com/huggingface/datasets/pull/5626",
"diff_url": "https://github.com/huggingface/datasets/pull/5626.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5626.patch",
"merged_at": "2023-03-21T06:28... | true |
1,618,971,855 | 5,625 | Allow "jsonl" data type signifier | open | [
"You can use \"json\" instead. It doesn't work by extension names, but rather by dataset builder names, e.g. \"text\", \"imagefolder\", etc. I don't think the example in `transformers` is correct because of that",
"Yes, I understand the reasoning but this issue is to propose that the example in transformers (whil... | 2023-03-10T13:21:48 | 2023-03-11T10:35:39 | null | ### Feature request
`load_dataset` currently does not accept `jsonl` as a type signifier, only `json`.
### Motivation
I was working with one of the `run_translation` scripts and used my own datasets (`.jsonl`) as train_dataset. But the default code did not work because
```
FileNotFoundError: Couldn't find a dataset scri... | BramVanroy | https://github.com/huggingface/datasets/issues/5625 | null | false |
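The current behavior, for reference (the file path is a placeholder):

```python
from datasets import load_dataset

# The "json" builder handles .jsonl (newline-delimited JSON) files too.
ds = load_dataset("json", data_files={"train": "train.jsonl"})
```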
1,617,400,192 | 5,624 | glue datasets returning -1 for test split | closed | [
"Hi @lithafnium, thanks for reporting.\r\n\r\nPlease note that you can use the \"Community\" tab in the corresponding dataset page to start any discussion: https://huggingface.co/datasets/glue/discussions\r\n\r\nIndeed this issue was already raised there (https://huggingface.co/datasets/glue/discussions/5) and answ... | 2023-03-09T14:47:18 | 2023-03-09T16:49:29 | 2023-03-09T16:49:29 | ### Describe the bug
Any dataset downloaded from GLUE has -1 as the class label for the test split. Train and validation have regular 0/1 class labels. This is also present in the dataset card online.
### Steps to reproduce the bug
```
dataset = load_dataset("glue", "sst2")
for d in dataset:
# prints out -1
... | lithafnium | https://github.com/huggingface/datasets/issues/5624 | null | false |
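A short sketch of the explanation from the linked discussion: -1 is the placeholder for withheld test labels, so the test split should be filtered or used for prediction only.

```python
from datasets import load_dataset

dataset = load_dataset("glue", "sst2")
print(set(dataset["test"]["label"]))            # {-1}: labels are withheld
labeled = dataset["test"].filter(lambda ex: ex["label"] != -1)  # empty here
```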
1,616,712,665 | 5,623 | Remove set_access_token usage + fail tests if FutureWarning | closed | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-03-09T08:46:01 | 2023-03-09T15:39:00 | 2023-03-09T15:31:59 | `set_access_token` is deprecated and will be removed in `huggingface_hub>=0.14`.
This PR removes it from the tests (it was not used in `datasets` source code itself). FYI, it was not needed since `set_access_token` was just setting git credentials and `datasets` doesn't seem to use git anywhere.
In the future, us... | Wauplin | https://github.com/huggingface/datasets/pull/5623 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5623",
"html_url": "https://github.com/huggingface/datasets/pull/5623",
"diff_url": "https://github.com/huggingface/datasets/pull/5623.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5623.patch",
"merged_at": "2023-03-09T15:31... | true |
1,615,190,942 | 5,622 | Update README template to better template | closed | [
"IMO this template should stay generic.\r\n\r\nAlso, we now use [the card template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md) from `hugginface_hub` as the source of truth on the Hub (you now have the option to import it into the dataset card/READ... | 2023-03-08T12:30:23 | 2023-03-11T05:07:38 | 2023-03-11T05:07:38 | null | emiltj | https://github.com/huggingface/datasets/pull/5622 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5622",
"html_url": "https://github.com/huggingface/datasets/pull/5622",
"diff_url": "https://github.com/huggingface/datasets/pull/5622.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5622.patch",
"merged_at": null
} | true |
1,615,029,615 | 5,621 | Adding Oracle Cloud to docs | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-08T10:22:50 | 2023-03-11T00:57:18 | 2023-03-11T00:49:56 | Adding Oracle Cloud's fsspec implementation to the list of supported cloud storage providers. | ahosler | https://github.com/huggingface/datasets/pull/5621 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5621",
"html_url": "https://github.com/huggingface/datasets/pull/5621",
"diff_url": "https://github.com/huggingface/datasets/pull/5621.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5621.patch",
"merged_at": "2023-03-11T00:49... | true |
1,613,460,520 | 5,620 | Bump pyarrow to 8.0.0 | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-07T13:31:53 | 2023-03-08T14:01:27 | 2023-03-08T13:54:22 | Fix those for Pandas 2.0 (tested [here](https://github.com/huggingface/datasets/actions/runs/4346221280/jobs/7592010397) with pandas==2.0.0.rc0):
```python
=========================== short test summary info ============================
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_to_parquet_in_memory... | lhoestq | https://github.com/huggingface/datasets/pull/5620 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5620",
"html_url": "https://github.com/huggingface/datasets/pull/5620",
"diff_url": "https://github.com/huggingface/datasets/pull/5620.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5620.patch",
"merged_at": "2023-03-08T13:54... | true |