id | number | title | state | comments | created_at | updated_at | closed_at | body | user | html_url | pull_request | is_pull_request
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,406,078,357 | 5,105 | Specifying an existing folder in download_and_prepare deletes everything in it | open | [
"cc @lhoestq ",
"Thanks for reporting, @cakiki.\r\n\r\nI would say the deletion of the dir is an expected behavior though...",
"`dask.to_parquet` has an \"overwrite\" parameter and default is `False`, we could also have something similar",
"Thank you both for your feedback!\r\n\r\n@albertvillanova I think I m... | 2022-10-12T11:53:33 | 2022-10-20T11:53:59 | null | ## Describe the bug
The builder correctly creates the `output_dir` folder if it doesn't exist, but if the folder exists everything within it is deleted. Specifying `"."` as the `output_dir` deletes everything in your current dir but also leads to **another bug** whose traceback is the following:
```
... | cakiki | https://github.com/huggingface/datasets/issues/5105 | null | false |
1,405,973,102 | 5,104 | Fix loading how to guide (#5102) | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-12T10:34:42 | 2022-10-12T11:34:07 | 2022-10-12T11:31:55 | null | riccardobucco | https://github.com/huggingface/datasets/pull/5104 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5104",
"html_url": "https://github.com/huggingface/datasets/pull/5104",
"diff_url": "https://github.com/huggingface/datasets/pull/5104.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5104.patch",
"merged_at": "2022-10-12T11:31... | true |
1,405,956,311 | 5,103 | url encode hub url (#5099) | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-12T10:22:12 | 2022-10-12T15:27:24 | 2022-10-12T15:24:47 | null | riccardobucco | https://github.com/huggingface/datasets/pull/5103 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5103",
"html_url": "https://github.com/huggingface/datasets/pull/5103",
"diff_url": "https://github.com/huggingface/datasets/pull/5103.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5103.patch",
"merged_at": "2022-10-12T15:24... | true |
1,404,746,554 | 5,102 | Error in creating a dataset from a Python generator | closed | [
"Hi, thanks for reporting! The last line should be `dataset = Dataset.from_generator(my_gen)`.",
"Can I work on this one?"
] | 2022-10-11T14:28:58 | 2022-10-12T11:31:56 | 2022-10-12T11:31:56 | ## Describe the bug
In HOW-TO-GUIDES > Load > [Python generator](https://huggingface.co/docs/datasets/v2.5.2/en/loading#python-generator), the code example defines the `my_gen` function, but when creating the dataset, an undefined `my_dict` is passed in.
```Python
>>> from datasets import Dataset
>>> def my_gen... | yangxuhui | https://github.com/huggingface/datasets/issues/5102 | null | false |
1,404,513,085 | 5,101 | Free the "hf" filesystem protocol for `hffs` | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-11T11:57:21 | 2022-10-12T15:32:59 | 2022-10-12T15:30:38 | null | lhoestq | https://github.com/huggingface/datasets/pull/5101 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5101",
"html_url": "https://github.com/huggingface/datasets/pull/5101",
"diff_url": "https://github.com/huggingface/datasets/pull/5101.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5101.patch",
"merged_at": "2022-10-12T15:30... | true |
1,404,458,586 | 5,100 | datasets[s3] sagemaker can't run a model - datasets issue with Value and ClassLabel and cast() method | closed | [] | 2022-10-11T11:16:31 | 2022-10-11T13:48:26 | 2022-10-11T13:48:26 | null | jagochi | https://github.com/huggingface/datasets/issues/5100 | null | false |
1,404,370,191 | 5,099 | datasets doesn't support # in data paths | closed | [
"`datasets` doesn't seem to urlencode the directory names here\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/utils/file_utils.py#L109-L111\r\n\r\nfor example we should have\r\n```python\r\nfrom datasets.utils.file_utils import hf_hub_url\r\n\r\nurl = hf_hu... | 2022-10-11T10:05:32 | 2022-10-13T13:14:20 | 2022-10-13T13:14:20 | ## Describe the bug
Dataset files whose paths contain a `#` symbol aren't read correctly.
## Steps to reproduce the bug
The data in folder `c#` of this [dataset](https://huggingface.co/datasets/loubnabnl/bigcode_csharp) can't be loaded, while the folder `c_sharp` with the same data is loaded properly.
```python
ds = lo... | loubnabnl | https://github.com/huggingface/datasets/issues/5099 | null | false |
1,404,058,518 | 5,098 | Classes label error when loading symbolic links using imagefolder | closed | [
"It can be solved temporarily by remove `resolve` in \r\nhttps://github.com/huggingface/datasets/blob/bef23be3d9543b1ca2da87ab2f05070201044ddc/src/datasets/data_files.py#L278",
"Hi, thanks for reporting and suggesting a fix! We still need to account for `.`/`..` in the file path, so a more robust fix would be `P... | 2022-10-11T06:10:58 | 2022-11-14T14:40:20 | 2022-11-14T14:40:20 | **Is your feature request related to a problem? Please describe.**
Like this: #4015
When there are **symbolic links** to pictures in the data folder, the parent folder name of the **real file** will be used as the class name instead of the parent folder of the symbolic link itself. Can you give an option to decide wh... | horizon86 | https://github.com/huggingface/datasets/issues/5098 | null | false |
1,403,679,353 | 5,097 | Fatal error with pyarrow/libarrow.so | closed | [
"Thanks for reporting, @catalys1.\r\n\r\nThis seems a duplicate of:\r\n- #3310 \r\n\r\nThe source of the problem is in PyArrow:\r\n- [ARROW-15141: [C++] Fatal error condition occurred in aws_thread_launch](https://issues.apache.org/jira/browse/ARROW-15141)\r\n- [ARROW-17501: [C++] Fatal error condition occurred in ... | 2022-10-10T20:29:04 | 2022-10-11T06:56:01 | 2022-10-11T06:56:00 | ## Describe the bug
When using datasets, at the very end of my jobs the program crashes (see trace below).
It doesn't seem to affect anything, as it appears to happen as the program is closing down. Just importing `datasets` is enough to cause the error.
## Steps to reproduce the bug
This is sufficient to reprodu... | catalys1 | https://github.com/huggingface/datasets/issues/5097 | null | false |
1,403,379,816 | 5,096 | Transfer some canonical datasets under an organization namespace | closed | [
"The transfer of the dummy dataset to the dummy org works as expected:\r\n```python\r\nIn [1]: from datasets import load_dataset; ds = load_dataset(\"dummy_canonical_dataset\", download_mode=\"force_redownload\"); ds\r\nDownloading builder script: 100%|███████████████████████████████████████████████████████████████... | 2022-10-10T15:44:31 | 2024-06-24T06:06:28 | 2024-06-24T06:02:45 | As discussed during our @huggingface/datasets meeting, we are planning to move some "canonical" dataset scripts under their corresponding organization namespace (if this does not exist).
On the contrary, if the dataset already exists under the organization namespace, we are deprecating the canonical one (and eventua... | albertvillanova | https://github.com/huggingface/datasets/issues/5096 | null | false |
1,403,221,408 | 5,095 | Fix tutorial (#5093) | closed | [
"Oops I merged without linking to the hacktoberfest issue - not sure if it counts in this case\r\n\r\nsorry about that..\r\n\r\nNext time you can just mention \"Close #XXXX\" in your issue to link it",
"It should :) (the `hacktoberfest` repo topic is all that matters)"
] | 2022-10-10T13:55:15 | 2022-10-10T17:50:52 | 2022-10-10T15:32:20 | Close #5093 | riccardobucco | https://github.com/huggingface/datasets/pull/5095 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5095",
"html_url": "https://github.com/huggingface/datasets/pull/5095",
"diff_url": "https://github.com/huggingface/datasets/pull/5095.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5095.patch",
"merged_at": "2022-10-10T15:32... | true |
1,403,214,950 | 5,094 | Multiprocessing with `Dataset.map` and `PyTorch` results in deadlock | closed | [
"Hi ! Could it be an Out of Memory issue that could have killed one of the processes ? can you check your memory ?",
"Hi! I don't think it is a memory issue. I'm monitoring the main and spawn python processes and threads with `htop` and the memory does not peak. Besides, the example I've posted above should not b... | 2022-10-10T13:50:56 | 2023-07-24T15:29:13 | 2023-07-24T15:29:13 | ## Describe the bug
There seems to be an issue with using multiprocessing with `datasets.Dataset.map` (i.e. setting `num_proc` to a value greater than one) combined with a function that uses `torch` under the hood. The subprocesses that `datasets.Dataset.map` spawns [a this step](https://github.com/huggingface/datase... | RR-28023 | https://github.com/huggingface/datasets/issues/5094 | null | false |
1,402,939,660 | 5,093 | Mismatch between tutoriel and doc | closed | [
"Hi, thanks for reporting! This line should be replaced with \r\n```python\r\ndataset = dataset.map(lambda examples: tokenizer(examples[\"text\"], return_tensors=\"np\"), batched=True)\r\n```\r\nfor it to work (the `return_tensors` part inside the `tokenizer` call).",
"Can I work on this?",
"Fixed in https://gi... | 2022-10-10T10:23:53 | 2022-10-10T17:51:15 | 2022-10-10T17:51:14 | ## Describe the bug
In the "Process text data" tutorial, [`map` has `return_tensors` as kwarg](https://huggingface.co/docs/datasets/main/en/nlp_process#map). It does not seem to appear in the [function documentation](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.map), nor... | clefourrier | https://github.com/huggingface/datasets/issues/5093 | null | false |
1,402,713,517 | 5,092 | Use HTML relative paths for tiles in the docs | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Good catch, @lewtun. Thanks for the fix.\r\n> \r\n> Do you know if there are other absolute paths in the docs that should be fixed as well?\r\n\r\nI found a few more in [0d4796b](https://github.com/huggingface/datasets/pull/5092/co... | 2022-10-10T07:24:27 | 2022-10-11T13:25:45 | 2022-10-11T13:23:23 | This PR replaces the absolute paths in the landing page tiles with relative ones so that one can test navigation both locally in and in future PRs (see [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5084/en/index) for an example PR where the links don't work).
I encountered this while working on the `op... | lewtun | https://github.com/huggingface/datasets/pull/5092 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5092",
"html_url": "https://github.com/huggingface/datasets/pull/5092",
"diff_url": "https://github.com/huggingface/datasets/pull/5092.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5092.patch",
"merged_at": "2022-10-11T13:23... | true |
1,401,112,552 | 5,091 | Allow connection objects in `from_sql` + small doc improvement | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-07T12:39:44 | 2022-10-09T13:19:15 | 2022-10-09T13:16:57 | Allow connection objects in `from_sql` (emit a warning that they are cachable) and add a tip that explains the format of the con parameter when provided as a URI string.
PS: ~~This PR contains a parameter link, so https://github.com/huggingface/doc-builder/pull/311 needs to be merged before it's "ready for review".~... | mariosasko | https://github.com/huggingface/datasets/pull/5091 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5091",
"html_url": "https://github.com/huggingface/datasets/pull/5091",
"diff_url": "https://github.com/huggingface/datasets/pull/5091.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5091.patch",
"merged_at": "2022-10-09T13:16... | true |
1,401,102,407 | 5,090 | Review sync issues from GitHub to Hub | closed | [
"Nice!!"
] | 2022-10-07T12:31:56 | 2022-10-08T07:07:36 | 2022-10-08T07:07:36 | ## Describe the bug
We have discovered that sometimes there were sync issues between GitHub and Hub datasets, after a merge commit to main branch.
For example:
- this merge commit: https://github.com/huggingface/datasets/commit/d74a9e8e4bfff1fed03a4cab99180a841d7caf4b
- was not properly synced with the Hub: https... | albertvillanova | https://github.com/huggingface/datasets/issues/5090 | null | false |
1,400,788,486 | 5,089 | Resume failed process | open | [] | 2022-10-07T08:07:03 | 2022-10-07T08:07:03 | null | **Is your feature request related to a problem? Please describe.**
When a process (`map`, `filter`, etc.) crashes part-way through, you lose all progress.
**Describe the solution you'd like**
It would be good if the cache reflected the partial progress, so that after we restart the script, the process can restart ... | felix-schneider | https://github.com/huggingface/datasets/issues/5089 | null | false |
1,400,530,412 | 5,088 | load_datasets("json", ...) doesn't read local .json.gz properly | open | [
"Hi @junwang-wish, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce the bug. Which version of `datasets` are you using? Does the problem persist if you update `datasets`?\r\n```shell\r\npip install -U datasets\r\n``` ",
"Thanks @albertvillanova I updated `datasets` from `2.5.1` to `2.5.2` and... | 2022-10-07T02:16:58 | 2022-10-07T14:43:16 | null | ## Describe the bug
I have a local file `*.json.gz` and it can be read by `pandas.read_json(lines=True)`, but cannot be read by `load_datasets("json")` (resulting in 0 lines)
## Steps to reproduce the bug
```python
fpath = '/data/junwang/.cache/general/57b6f2314cbe0bc45dda5b78f0871df2/test.json.gz'
ds_panda = Da... | junwang-wish | https://github.com/huggingface/datasets/issues/5088 | null | false |
1,400,487,967 | 5,087 | Fix filter with empty indices | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-07T01:07:00 | 2022-10-07T18:43:03 | 2022-10-07T18:40:26 | Fix #5085 | Mouhanedg56 | https://github.com/huggingface/datasets/pull/5087 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5087",
"html_url": "https://github.com/huggingface/datasets/pull/5087",
"diff_url": "https://github.com/huggingface/datasets/pull/5087.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5087.patch",
"merged_at": "2022-10-07T18:40... | true |
1,400,216,975 | 5,086 | HTTPError: 404 Client Error: Not Found for url | closed | [
"FYI @lewtun ",
"Hi @km5ar, thanks for reporting.\r\n\r\nThis should be fixed in the notebook:\r\n- the filename `datasets-issues-with-hf-doc-builder.jsonl` no longer exists on the repo; instead, current filename is `datasets-issues-with-comments.jsonl`\r\n- see: https://huggingface.co/datasets/lewtun/github-issu... | 2022-10-06T19:48:58 | 2022-10-07T15:12:01 | 2022-10-07T15:12:01 | ## Describe the bug
I was following chap 5 from huggingface course: https://huggingface.co/course/chapter5/6?fw=tf
However, I'm not able to download the datasets, getting a 404 error
<img width="1160" alt="iShot2022-10-06_15 54 50" src="https://user-images.githubusercontent.com/54015474/194406327-ae62c2f3-1da5-... | keyuchen21 | https://github.com/huggingface/datasets/issues/5086 | null | false |
1,400,113,569 | 5,085 | Filtering on an empty dataset returns a corrupted dataset. | closed | [
"~~It seems like #5043 fix (merged recently) is the root cause of such behaviour. When we empty indices mapping (because the dataset length equals to zero), we can no longer get column item like: `ds_filter_2['sentence']` which uses\r\n`ds_filter_1._indices.column(0)`~~\r\n\r\n**UPDATE:**\r\nEmpty datasets are retu... | 2022-10-06T18:18:49 | 2022-10-07T19:06:02 | 2022-10-07T18:40:26 | ## Describe the bug
When filtering a dataset twice, where the first result is an empty dataset, the second dataset seems corrupted.
## Steps to reproduce the bug
```python
datasets = load_dataset("glue", "sst2")
dataset_split = datasets['validation']
ds_filter_1 = dataset_split.filter(lambda x: False) # ... | gabegma | https://github.com/huggingface/datasets/issues/5085 | null | false |
1,400,016,229 | 5,084 | IterableDataset formatting in numpy/torch/tf/jax | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5084). All of your documentation changes will be reflected on that endpoint.",
"Actually I'm not happy with this implementation. It always require the iterable dataset to have definite `features`, which removes a lot of flexibi... | 2022-10-06T16:53:38 | 2023-09-24T10:06:51 | 2022-12-20T17:19:52 | This code now returns a numpy array:
```python
from datasets import load_dataset
ds = load_dataset("imagenet-1k", split="train", streaming=True).with_format("np")
print(next(iter(ds))["image"])
```
It also works with "arrow", "pandas", "torch", "tf" and "jax"
### Implementation details:
I'm using the ex... | lhoestq | https://github.com/huggingface/datasets/pull/5084 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5084",
"html_url": "https://github.com/huggingface/datasets/pull/5084",
"diff_url": "https://github.com/huggingface/datasets/pull/5084.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5084.patch",
"merged_at": null
} | true |
1,399,842,514 | 5,083 | Support numpy/torch/tf/jax formatting for IterableDataset | closed | [
"hii @lhoestq, can you assign this issue to me? Though i am new to open source still I would love to put my best foot forward. I can see there isn't anyone right now assigned to this issue.",
"Hi @zutarich ! This issue was fixed by #5852 - sorry I forgot to close it\r\n\r\nFeel free to look for other issues and p... | 2022-10-06T15:14:58 | 2023-10-09T12:42:15 | 2023-10-09T12:42:15 | Right now `IterableDataset` doesn't do any formatting.
In particular this code should return a numpy array:
```python
from datasets import load_dataset
ds = load_dataset("imagenet-1k", split="train", streaming=True).with_format("np")
print(next(iter(ds))["image"])
```
Right now it returns a PIL.Image.
S... | lhoestq | https://github.com/huggingface/datasets/issues/5083 | null | false |
1,399,379,777 | 5,082 | adding keep in memory | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @mariosasko , I have added a test for the `keep_in_memory` version. I have also removed the `Compatible with temp_seed` part in the scope of `dset_shuffled`, please verify if that makes sense."
] | 2022-10-06T11:10:46 | 2022-10-07T14:35:34 | 2022-10-07T14:32:54 | Fixing #514 .
Hello @mariosasko 👋, I have implemented what you recommended to fix the keep-in-memory problem for shuffle in issue #514 . | Mustapha-AJEGHRIR | https://github.com/huggingface/datasets/pull/5082 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5082",
"html_url": "https://github.com/huggingface/datasets/pull/5082",
"diff_url": "https://github.com/huggingface/datasets/pull/5082.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5082.patch",
"merged_at": "2022-10-07T14:32... | true |
1,399,340,050 | 5,081 | Bug loading `sentence-transformers/parallel-sentences` | open | [
"tagging @nreimers ",
"The dataset is sadly not really compatible to be loaded with `load_dataset`. So far it is better to git clone it and to use the files directly.\r\n\r\nA data loading script would be needed to be added to this dataset. But this was too much overhead / not really intuitive how to create it.",... | 2022-10-06T10:47:51 | 2022-10-11T10:00:48 | null | ## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("sentence-transformers/parallel-sentences")
```
raises this:
```
/home/phmay/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py:697: FutureWarning: the '... | PhilipMay | https://github.com/huggingface/datasets/issues/5081 | null | false |
1,398,849,565 | 5,080 | Use hfh for caching | open | [
"There is some discussion in https://github.com/huggingface/huggingface_hub/pull/1088 if it can help :)"
] | 2022-10-06T05:51:58 | 2022-10-06T14:26:05 | null | ## Is your feature request related to a problem?
As previously discussed in our meeting with @Wauplin and agreed on our last datasets team sync meeting, I'm investigating how `datasets` can use `hfh` for caching.
## Describe the solution you'd like
Due to the peculiarities of the `datasets` cache, I would prop... | albertvillanova | https://github.com/huggingface/datasets/issues/5080 | null | false |
1,398,609,305 | 5,079 | refactor: replace AssertionError with more meaningful exceptions (#5074) | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-06T01:39:35 | 2022-10-07T14:35:43 | 2022-10-07T14:33:10 | Closes #5074
Replaces `AssertionError` in the following files with more descriptive exceptions:
- `src/datasets/arrow_reader.py`
- `src/datasets/builder.py`
- `src/datasets/utils/version.py`
The issue listed more files that needed to be fixed, but the rest of them were contained in the top-level `datasets` d... | galbwe | https://github.com/huggingface/datasets/pull/5079 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5079",
"html_url": "https://github.com/huggingface/datasets/pull/5079",
"diff_url": "https://github.com/huggingface/datasets/pull/5079.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5079.patch",
"merged_at": "2022-10-07T14:33... | true |
1,398,335,148 | 5,078 | Fix header level in Audio docs | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-05T20:22:44 | 2022-10-06T08:12:23 | 2022-10-06T08:09:41 | Fixes header level so `Dataset features` is the doc title instead of `The Audio type`:
 | stevhliu | https://github.com/huggingface/datasets/pull/5078 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5078",
"html_url": "https://github.com/huggingface/datasets/pull/5078",
"diff_url": "https://github.com/huggingface/datasets/pull/5078.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5078.patch",
"merged_at": "2022-10-06T08:09... | true |
1,398,080,859 | 5,077 | Fix passed download_config in HubDatasetModuleFactoryWithoutScript | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-05T16:42:36 | 2022-10-06T05:31:22 | 2022-10-06T05:29:06 | Fix passed `download_config` in `HubDatasetModuleFactoryWithoutScript`. | albertvillanova | https://github.com/huggingface/datasets/pull/5077 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5077",
"html_url": "https://github.com/huggingface/datasets/pull/5077",
"diff_url": "https://github.com/huggingface/datasets/pull/5077.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5077.patch",
"merged_at": "2022-10-06T05:29... | true |
1,397,918,092 | 5,076 | fix: update exception throw from OSError to EnvironmentError in `push⦠| closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-05T14:46:29 | 2022-10-07T14:35:57 | 2022-10-07T14:33:27 | Status:
Ready for review
Description of Changes:
Fixes #5075
Changes proposed in this pull request:
- Throw EnvironmentError instead of OSError in `push_to_hub` when the Hub token is not present. | rahulXs | https://github.com/huggingface/datasets/pull/5076 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5076",
"html_url": "https://github.com/huggingface/datasets/pull/5076",
"diff_url": "https://github.com/huggingface/datasets/pull/5076.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5076.patch",
"merged_at": "2022-10-07T14:33... | true |
1,397,865,501 | 5,075 | Throw EnvironmentError when token is not present | closed | [
"@mariosasko I've raised a PR #5076 against this issue. Please help to review. Thanks."
] | 2022-10-05T14:14:18 | 2022-10-07T14:33:28 | 2022-10-07T14:33:28 | Throw EnvironmentError instead of OSError ([link](https://github.com/huggingface/datasets/blob/6ad430ba0cdeeb601170f732d4bd977f5c04594d/src/datasets/arrow_dataset.py#L4306) to the line) in `push_to_hub` when the Hub token is not present. | mariosasko | https://github.com/huggingface/datasets/issues/5075 | null | false |
1,397,850,352 | 5,074 | Replace AssertionErrors with more meaningful errors | closed | [
"Hi, can I pick up this issue?",
"#self-assign",
"Looks like the top-level `datasource` directory was removed when https://github.com/huggingface/datasets/pull/4974 was merged, so there are 3 source files to fix."
] | 2022-10-05T14:03:55 | 2022-10-07T14:33:11 | 2022-10-07T14:33:11 | Replace the AssertionErrors with more meaningful errors such as ValueError, TypeError, etc.
The files with AssertionErrors that need to be replaced:
```
src/datasets/arrow_reader.py
src/datasets/builder.py
src/datasets/utils/version.py
``` | mariosasko | https://github.com/huggingface/datasets/issues/5074 | null | false |
1,397,832,183 | 5,073 | Restore saved format state in `load_from_disk` | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-05T13:51:47 | 2022-10-11T16:55:07 | 2022-10-11T16:49:23 | Hello! @mariosasko
This pull request relates to issue #5050 and intends to add the format to datasets loaded from disk.
All I did was add a set_format in the Dataset.load_from_disk, as DatasetDict.load_from_disk relies on the first.
I don't know if I should add a test and where, so let me know if I should and ... | asofiaoliveira | https://github.com/huggingface/datasets/pull/5073 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5073",
"html_url": "https://github.com/huggingface/datasets/pull/5073",
"diff_url": "https://github.com/huggingface/datasets/pull/5073.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5073.patch",
"merged_at": "2022-10-11T16:49... | true |
1,397,765,531 | 5,072 | Image & Audio formatting for numpy/torch/tf/jax | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I just added a consolidation step so that numpy arrays or tensors of images are stacked together if the shapes match, instead of having lists of tensors\r\n\r\nFeel free to review @mariosasko :)",
"I added a few lines in the docs a... | 2022-10-05T13:07:03 | 2022-10-10T13:24:10 | 2022-10-10T13:21:32 | Added support for image and audio formatting for numpy, torch, tf and jax.
For images, the dtype used is the one of the image (the one returned by PIL.Image), e.g. uint8
I also added support for string, binary and None types. In particular for torch and jax, strings are kept unchanged (previously it was returning... | lhoestq | https://github.com/huggingface/datasets/pull/5072 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5072",
"html_url": "https://github.com/huggingface/datasets/pull/5072",
"diff_url": "https://github.com/huggingface/datasets/pull/5072.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5072.patch",
"merged_at": "2022-10-10T13:21... | true |
1,397,301,270 | 5,071 | Support DEFAULT_CONFIG_NAME when no BUILDER_CONFIGS | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Super, thanks a lot for adding this support, Albert!"
] | 2022-10-05T06:28:39 | 2022-10-06T14:43:12 | 2022-10-06T14:40:26 | This PR supports defining a default config name, even if no predefined allowed config names are set.
Fix #5070.
CC: @stas00 | albertvillanova | https://github.com/huggingface/datasets/pull/5071 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5071",
"html_url": "https://github.com/huggingface/datasets/pull/5071",
"diff_url": "https://github.com/huggingface/datasets/pull/5071.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5071.patch",
"merged_at": "2022-10-06T14:40... | true |
1,396,765,647 | 5,070 | Support default config name when no builder configs | closed | [
"Thank you for creating this feature request, Albert.\r\n\r\nFor context this is the datatest where Albert has been helping me to switch to on-the-fly split config https://huggingface.co/datasets/HuggingFaceM4/cm4-synthetic-testing\r\n\r\nand the attempt to switch on-the-fly splits was here: https://huggingface.co/... | 2022-10-04T19:49:35 | 2022-10-06T14:40:26 | 2022-10-06T14:40:26 | **Is your feature request related to a problem? Please describe.**
As discussed with @stas00, we could support defining a default config name, even if no predefined allowed config names are set. That is, support `DEFAULT_CONFIG_NAME`, even when `BUILDER_CONFIGS` is not defined.
**Additional context**
In order to ... | albertvillanova | https://github.com/huggingface/datasets/issues/5070 | null | false |
1,396,361,768 | 5,067 | Fix CONTRIBUTING once dataset scripts transferred to Hub | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-04T14:16:05 | 2022-10-06T06:14:43 | 2022-10-06T06:12:12 | This PR updates the `CONTRIBUTING.md` guide, once the all dataset scripts have been removed from the GitHub repo and transferred to the HF Hub:
- #4974
See diff here: https://github.com/huggingface/datasets/commit/e3291ecff9e54f09fcee3f313f051a03fdc3d94b
Additionally, this PR fixes the line separator that by som... | albertvillanova | https://github.com/huggingface/datasets/pull/5067 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5067",
"html_url": "https://github.com/huggingface/datasets/pull/5067",
"diff_url": "https://github.com/huggingface/datasets/pull/5067.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5067.patch",
"merged_at": "2022-10-06T06:12... | true |
1,396,086,745 | 5,066 | Support streaming gzip.open | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-04T11:20:05 | 2022-10-06T15:13:51 | 2022-10-06T15:11:29 | This PR implements support for streaming out-of-the-box dataset scripts containing `gzip.open`.
This has been a recurring issue. See, e.g.:
- #5060
- #3191 | albertvillanova | https://github.com/huggingface/datasets/pull/5066 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5066",
"html_url": "https://github.com/huggingface/datasets/pull/5066",
"diff_url": "https://github.com/huggingface/datasets/pull/5066.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5066.patch",
"merged_at": "2022-10-06T15:11... | true |
1,396,003,362 | 5,065 | Ci py3.10 | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Does it sound good to you @albertvillanova ?"
] | 2022-10-04T10:13:51 | 2022-11-29T15:28:05 | 2022-11-29T15:25:26 | Added a CI job for python 3.10
Some dependencies, like Apache Beam, don't work on 3.10, so I removed them from the extras in this case.
I also removed some s3 fixtures that we don't use anymore (and that don't work on 3.10 anyway) | lhoestq | https://github.com/huggingface/datasets/pull/5065 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5065",
"html_url": "https://github.com/huggingface/datasets/pull/5065",
"diff_url": "https://github.com/huggingface/datasets/pull/5065.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5065.patch",
"merged_at": "2022-11-29T15:25... | true |
1,395,978,143 | 5,064 | Align signature of create/delete_repo with latest hfh | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-04T09:54:53 | 2022-10-07T17:02:11 | 2022-10-07T16:59:30 | This PR aligns the signature of `create_repo`/`delete_repo` with the current one in hfh, by removing deprecated `name` and `organization`, and using `repo_id` instead.
Related to:
- #5063
CC: @lhoestq | albertvillanova | https://github.com/huggingface/datasets/pull/5064 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5064",
"html_url": "https://github.com/huggingface/datasets/pull/5064",
"diff_url": "https://github.com/huggingface/datasets/pull/5064.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5064.patch",
"merged_at": "2022-10-07T16:59... | true |
1,395,895,463 | 5,063 | Align signature of list_repo_files with latest hfh | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-04T08:51:46 | 2022-10-07T16:42:57 | 2022-10-07T16:40:16 | This PR aligns the signature of `list_repo_files` with the current one in `hfh`, by renaming deprecated `token` to `use_auth_token`.
This is already the case for `dataset_info`.
CC: @lhoestq | albertvillanova | https://github.com/huggingface/datasets/pull/5063 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5063",
"html_url": "https://github.com/huggingface/datasets/pull/5063",
"diff_url": "https://github.com/huggingface/datasets/pull/5063.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5063.patch",
"merged_at": "2022-10-07T16:40... | true |
1,395,739,417 | 5,062 | Fix CI hfh token warning | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"good catch !"
] | 2022-10-04T06:36:54 | 2022-10-04T08:58:15 | 2022-10-04T08:42:31 | In our CI, we get warnings from `hfh` about using deprecated `token`: https://github.com/huggingface/datasets/actions/runs/3174626525/jobs/5171672431
```
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_private
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub
tests/te... | albertvillanova | https://github.com/huggingface/datasets/pull/5062 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5062",
"html_url": "https://github.com/huggingface/datasets/pull/5062",
"diff_url": "https://github.com/huggingface/datasets/pull/5062.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5062.patch",
"merged_at": "2022-10-04T08:42... | true |
1,395,476,770 | 5,061 | `_pickle.PicklingError: logger cannot be pickled` in multiprocessing `map` | closed | [
"This is maybe related to python 3.10, do you think you could try on 3.8 ?\r\n\r\nIn the meantime we'll keep improving the support for 3.10. Let me add a dedicated CI",
"I did some binary search and seems like the root cause is either `multiprocess` or `dill`. python 3.10 is fine. Specifically:\r\n- `multiprocess... | 2022-10-03T23:51:38 | 2023-07-21T14:43:35 | 2023-07-21T14:43:34 | ## Describe the bug
When I `map` with multiple processes, this error occurs. The `.name` of the `logger` that fails to pickle in the final line is `datasets.fingerprint`.
```
File "~/project/dataset.py", line 204, in <dictcomp>
split: dataset.map(
File ".../site-packages/datasets/arrow_dataset.py", line 24... | ZhaofengWu | https://github.com/huggingface/datasets/issues/5061 | null | false |
1,395,382,940 | 5,060 | Unable to Use Custom Dataset Locally | closed | [
"Hi ! I opened a PR in your repo to fix this :)\r\nhttps://huggingface.co/datasets/zpn/pubchem_selfies/discussions/7\r\n\r\nbasically you need to use `open` for streaming to work properly",
"Thank you so much for this! Naive question, is this a feature of `open` or have you all overloaded it to be able to read fr... | 2022-10-03T21:55:16 | 2022-10-06T14:29:18 | 2022-10-06T14:29:17 | ## Describe the bug
I have uploaded a [dataset](https://huggingface.co/datasets/zpn/pubchem_selfies) and followed the instructions from the [dataset_loader](https://huggingface.co/docs/datasets/dataset_script#download-data-files-and-organize-splits) tutorial. In that tutorial, it says
```
If the data files live in ... | zanussbaum | https://github.com/huggingface/datasets/issues/5060 | null | false |
1,395,050,876 | 5,059 | Fix typo | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-03T17:05:25 | 2022-10-03T17:34:40 | 2022-10-03T17:32:27 | Fixes a small typo :) | stevhliu | https://github.com/huggingface/datasets/pull/5059 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5059",
"html_url": "https://github.com/huggingface/datasets/pull/5059",
"diff_url": "https://github.com/huggingface/datasets/pull/5059.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5059.patch",
"merged_at": "2022-10-03T17:32... | true |
1,394,962,424 | 5,058 | Mark CI tests as xfail when 502 error | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-03T15:53:55 | 2022-10-04T10:03:23 | 2022-10-04T10:01:23 | To make CI more robust, we could mark as xfail when the Hub raises a 502 error (besides 500 error):
- FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_skip_identical_files
- https://github.com/huggingface/datasets/actions/runs/3174626525/jobs/5171672431
```
> raise HTTPEr... | albertvillanova | https://github.com/huggingface/datasets/pull/5058 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5058",
"html_url": "https://github.com/huggingface/datasets/pull/5058",
"diff_url": "https://github.com/huggingface/datasets/pull/5058.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5058.patch",
"merged_at": "2022-10-04T10:01... | true |
1,394,827,216 | 5,057 | Support `converters` in `CsvBuilder` | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-03T14:23:21 | 2022-10-04T11:19:28 | 2022-10-04T11:17:32 | Add the `converters` param to `CsvBuilder`, to help in situations like [this one](https://discuss.huggingface.co/t/typeerror-in-load-dataset-related-to-a-sequence-of-strings/23545).
| mariosasko | https://github.com/huggingface/datasets/pull/5057 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5057",
"html_url": "https://github.com/huggingface/datasets/pull/5057",
"diff_url": "https://github.com/huggingface/datasets/pull/5057.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5057.patch",
"merged_at": "2022-10-04T11:17... | true |
1,394,713,173 | 5,056 | Fix broken URL's (GEM) | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5056). All of your documentation changes will be reflected on that endpoint.",
"Thanks, @manandey. We have removed all dataset scripts from this repo. Subsequent PRs should be opened directly on the Hugging Face Hub."
] | 2022-10-03T13:13:22 | 2022-10-04T13:49:00 | 2022-10-04T13:48:59 | This PR fixes the broken URL's in GEM. cc. @lhoestq, @albertvillanova | manandey | https://github.com/huggingface/datasets/pull/5056 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5056",
"html_url": "https://github.com/huggingface/datasets/pull/5056",
"diff_url": "https://github.com/huggingface/datasets/pull/5056.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5056.patch",
"merged_at": null
} | true |
1,394,503,844 | 5,055 | Fix backward compatibility for dataset_infos.json | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-03T10:30:14 | 2022-10-03T13:43:55 | 2022-10-03T13:41:32 | While working on https://github.com/huggingface/datasets/pull/5018 I noticed a small bug introduced in #4926 regarding backward compatibility for dataset_infos.json
Indeed, when a dataset repo had both dataset_infos.json and README.md, the JSON file was ignored. This is unexpected: in practice it should be ignored o... | lhoestq | https://github.com/huggingface/datasets/pull/5055 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5055",
"html_url": "https://github.com/huggingface/datasets/pull/5055",
"diff_url": "https://github.com/huggingface/datasets/pull/5055.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5055.patch",
"merged_at": "2022-10-03T13:41... | true |
1,394,152,728 | 5,054 | Fix license/citation information of squadshifts dataset card | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-03T05:19:13 | 2022-10-03T09:26:49 | 2022-10-03T09:24:30 | This PR fixes the license/citation information of squadshifts dataset card, once the dataset owners have responded to our request for information:
- https://github.com/modestyachts/squadshifts-website/issues/1
Additionally, we have updated the mention in their website to our `datasets` library (they were referring ... | albertvillanova | https://github.com/huggingface/datasets/pull/5054 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5054",
"html_url": "https://github.com/huggingface/datasets/pull/5054",
"diff_url": "https://github.com/huggingface/datasets/pull/5054.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5054.patch",
"merged_at": "2022-10-03T09:24... | true |
1,393,739,882 | 5,053 | Intermittent JSON parse error when streaming the Pile | open | [
"Maybe #2838 can help. In this PR we allow to skip bad chunks of JSON data to not crash the training\r\n\r\nDid you have warning messages before the error ?\r\n\r\nsomething like this maybe ?\r\n```\r\n03/24/2022 02:19:46 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host... | 2022-10-02T11:56:46 | 2022-10-04T17:59:03 | null | ## Describe the bug
I have an intermittent error when streaming the Pile, where I get a JSON parse error which causes my program to crash.
This is intermittent - when I rerun the program with the same random seed it does not crash in the same way. The exact point this happens also varied - it happened to me 11B tok... | neelnanda-io | https://github.com/huggingface/datasets/issues/5053 | null | false |
1,393,076,765 | 5,052 | added from_generator method to IterableDataset class. | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I added a test and moved the `streaming` param from `read` to `__init_`. Then, I also decided to update the `read` method of the rest of the packaged modules to account for this param. \r\n\r\n@hamid-vakilzadeh Are you OK with these ... | 2022-09-30T22:14:05 | 2022-10-05T12:51:48 | 2022-10-05T12:10:48 | Hello,
This resolves issues #4988.
I added a method `from_generator` to class `IterableDataset`.
I modified the `read` method of the input stream generator to also return an `IterableDataset`.
| hamid-vakilzadeh | https://github.com/huggingface/datasets/pull/5052 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5052",
"html_url": "https://github.com/huggingface/datasets/pull/5052",
"diff_url": "https://github.com/huggingface/datasets/pull/5052.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5052.patch",
"merged_at": "2022-10-05T12:10... | true |
1,392,559,503 | 5,051 | Revert task removal in folder-based builders | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-30T14:50:03 | 2022-10-03T12:23:35 | 2022-10-03T12:21:31 | Reverts the removal of `task_templates` in the folder-based builders. I also added the `AudioClassifaction` task for consistency.
This is needed to fix https://github.com/huggingface/transformers/issues/19177.
I think we should soon deprecate and remove the current task API (and investigate if it's possible to in... | mariosasko | https://github.com/huggingface/datasets/pull/5051 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5051",
"html_url": "https://github.com/huggingface/datasets/pull/5051",
"diff_url": "https://github.com/huggingface/datasets/pull/5051.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5051.patch",
"merged_at": "2022-10-03T12:21... | true |
1,392,381,882 | 5,050 | Restore saved format state in `load_from_disk` | closed | [
"Hi, can I work on this?",
"Hi, sure! Let us know if you need some pointers/help."
] | 2022-09-30T12:40:07 | 2022-10-11T16:49:24 | 2022-10-11T16:49:24 | Even though we save the `format` state in `save_to_disk`, we don't restore it in `load_from_disk`. We should fix that.
Reported here: https://discuss.huggingface.co/t/save-to-disk-loses-formatting-information/23815 | mariosasko | https://github.com/huggingface/datasets/issues/5050 | null | false |
1,392,361,381 | 5,049 | Add `kwargs` to `Dataset.from_generator` | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-30T12:24:27 | 2022-10-03T11:00:11 | 2022-10-03T10:58:15 | Add the `kwargs` param to `from_generator` to align it with the rest of the `from_` methods (this param allows passing custom `writer_batch_size` for instance). | mariosasko | https://github.com/huggingface/datasets/pull/5049 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5049",
"html_url": "https://github.com/huggingface/datasets/pull/5049",
"diff_url": "https://github.com/huggingface/datasets/pull/5049.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5049.patch",
"merged_at": "2022-10-03T10:58... | true |
1,392,170,680 | 5,048 | Fix bug with labels of eurlex config of lex_glue dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@JamesLYC88 here is the fix! Thanks again!",
"Thanks, @albertvillanova. When do you expect that this change will take effect when someone downloads the dataset?",
"The change is immediately available now, since this change we mad... | 2022-09-30T09:47:12 | 2022-09-30T16:30:25 | 2022-09-30T16:21:41 | Fix for a critical bug in the EURLEX dataset label list to make LexGLUE EURLEX results replicable.
In LexGLUE (Chalkidis et al., 2022), the following is mentioned w.r.t. EUR-LEX: _"It supports four different label granularities, comprising 21, 127, 567, 7390 EuroVoc concepts, respectively. We use the 100 most frequ... | iliaschalkidis | https://github.com/huggingface/datasets/pull/5048 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5048",
"html_url": "https://github.com/huggingface/datasets/pull/5048",
"diff_url": "https://github.com/huggingface/datasets/pull/5048.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5048.patch",
"merged_at": "2022-09-30T16:21... | true |
1,392,088,398 | 5,047 | Fix cats_vs_dogs | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-30T08:47:29 | 2022-09-30T10:23:22 | 2022-09-30T09:34:28 | Reported in https://github.com/huggingface/datasets/pull/3878
I updated the number of examples | lhoestq | https://github.com/huggingface/datasets/pull/5047 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5047",
"html_url": "https://github.com/huggingface/datasets/pull/5047",
"diff_url": "https://github.com/huggingface/datasets/pull/5047.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5047.patch",
"merged_at": "2022-09-30T09:34... | true |
1,391,372,519 | 5,046 | Audiofolder creates empty Dataset if files same level as metadata | closed | [
"Hi! Unfortunately, I can't reproduce this behavior. Instead, I get `ValueError: audio at 2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav doesn't have metadata in /audio-data/metadata.csv`, which can be fixed by removing the `./` from the file name.\r\n\r\n(Link to a Colab that tries to reproduce this behavior: htt... | 2022-09-29T19:17:23 | 2022-10-28T13:05:07 | 2022-10-28T13:05:07 | ## Describe the bug
When audio files are at the same level as the metadata (`metadata.csv` or `metadata.jsonl`), `load_dataset` returns a `DatasetDict` with no rows but the correct columns.
https://github.com/huggingface/datasets/blob/1ea4d091b7a4b83a85b2eeb8df65115d39af3766/docs/source/audio_dataset.mdx?plain... | msis | https://github.com/huggingface/datasets/issues/5046 | null | false |
1,391,287,609 | 5,045 | Automatically revert to last successful commit to hub when a push_to_hub is interrupted | closed | [
"Could you share the error you got please ? Maybe the full stack trace if you have it ?\r\n\r\nMaybe `push_to_hub` be implemented as a single commit @Wauplin ? This way if it fails, the repo is still at the previous (valid) state instead of ending-up in an invalid/incimplete state.",
"> Maybe push_to_hub be imple... | 2022-09-29T18:08:12 | 2023-10-16T13:30:49 | 2023-10-16T13:30:49 | **Is your feature request related to a problem? Please describe.**
I pushed a modification of a large dataset (removing a column) to the hub. The push was interrupted after some files were committed to the repo. This left the dataset raising an error on load_dataset() (ValueError couldn't cast … because column names do...
1,391,242,908 | 5,044 | integrate `load_from_disk` into `load_dataset` | open | [
"I agree the situation is not ideal and it would be awesome to use `load_dataset` to reload a dataset saved locally !\r\n\r\nFor context:\r\n\r\n- `load_dataset` works in three steps: download the dataset, then prepare it as an arrow dataset, and finally return a memory mapped arrow dataset. In particular it create... | 2022-09-29T17:37:12 | 2025-06-28T09:00:44 | null | **Is your feature request related to a problem? Please describe.**
Is it possible to make `load_dataset` more universal, similar to `from_pretrained` in `transformers`, so that it can handle both the hub and local-path datasets of all supported types?
Currently one has to choose a different loader depending on how ... | stas00 | https://github.com/huggingface/datasets/issues/5044 | null | false |
1,391,141,773 | 5,043 | Fix `flatten_indices` with empty indices mapping | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-29T16:17:28 | 2022-09-30T15:46:39 | 2022-09-30T15:44:25 | Fix #5038 | mariosasko | https://github.com/huggingface/datasets/pull/5043 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5043",
"html_url": "https://github.com/huggingface/datasets/pull/5043",
"diff_url": "https://github.com/huggingface/datasets/pull/5043.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5043.patch",
"merged_at": "2022-09-30T15:44... | true |
1,390,762,877 | 5,042 | Update swiss judgment prediction | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-29T12:10:02 | 2022-09-30T07:14:00 | 2022-09-29T14:32:02 | I forgot to add the new citation. | JoelNiklaus | https://github.com/huggingface/datasets/pull/5042 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5042",
"html_url": "https://github.com/huggingface/datasets/pull/5042",
"diff_url": "https://github.com/huggingface/datasets/pull/5042.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5042.patch",
"merged_at": "2022-09-29T14:32... | true |
1,390,722,230 | 5,041 | Support streaming hendrycks_test dataset. | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-29T11:37:58 | 2022-09-30T07:13:38 | 2022-09-29T12:07:29 | This PR:
- supports streaming
- fixes the description section of the dataset card | albertvillanova | https://github.com/huggingface/datasets/pull/5041 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5041",
"html_url": "https://github.com/huggingface/datasets/pull/5041",
"diff_url": "https://github.com/huggingface/datasets/pull/5041.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5041.patch",
"merged_at": "2022-09-29T12:07... | true |
1,390,566,428 | 5,040 | Fix NonMatchingChecksumError in hendrycks_test dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-29T09:37:43 | 2022-09-29T10:06:22 | 2022-09-29T10:04:19 | Update metadata JSON.
Fix #5039. | albertvillanova | https://github.com/huggingface/datasets/pull/5040 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5040",
"html_url": "https://github.com/huggingface/datasets/pull/5040",
"diff_url": "https://github.com/huggingface/datasets/pull/5040.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5040.patch",
"merged_at": "2022-09-29T10:04... | true |
1,390,353,315 | 5,039 | Hendrycks Checksum | closed | [
"Thanks for reporting, @DanielHesslow. We are fixing it. ",
"@albertvillanova thanks for taking care of this so quickly!",
"The dataset metadata is fixed. You can download it normally."
] | 2022-09-29T06:56:20 | 2022-09-29T10:23:30 | 2022-09-29T10:04:20 | Hi,
The checksum for [hendrycks_test](https://huggingface.co/datasets/hendrycks_test) does not compare correctly; I guess the file has been updated on the remote.
```
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://people.eecs.berkeley.edu/~hendrycks/data.... | DanielHesslow | https://github.com/huggingface/datasets/issues/5039 | null | false |
1,389,631,122 | 5,038 | `Dataset.unique` showing wrong output after filtering | closed | [
"Hi! It seems like `flatten_indices` (called in `unique`) doesn't know how to handle empty indices mappings. I'm working on the fix.",
"Thanks, that was fast!"
] | 2022-09-28T16:20:35 | 2022-09-30T15:44:25 | 2022-09-30T15:44:25 | ## Describe the bug
After filtering a dataset, and if no samples remain, `Dataset.unique` will return the unique values of the unfiltered dataset.
## Steps to reproduce the bug
```python
from datasets import Dataset
dataset = Dataset.from_dict({'id': [0]})
dataset = dataset.filter(lambda _: False)
print(data... | mxschmdt | https://github.com/huggingface/datasets/issues/5038 | null | false |
1,389,244,722 | 5,037 | Improve CI performance speed of PackagedDatasetTest | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"There was a CI error which seemed unrelated: https://github.com/huggingface/datasets/actions/runs/3143581330/jobs/5111807056\r\n```\r\nFAILED tests/test_load.py::test_load_dataset_private_zipped_images[True] - FileNotFoundError: http... | 2022-09-28T12:08:16 | 2022-09-30T16:05:42 | 2022-09-30T16:03:24 | This PR improves PackagedDatasetTest CI performance speed. For Ubuntu (latest):
- Duration (without parallelism) before: 334.78s (5.58m)
- Duration (without parallelism) afterwards: 0.48s
The approach is passing a dummy `data_files` argument to load the builder, so that it avoids the slow inferring of it over the ... | albertvillanova | https://github.com/huggingface/datasets/pull/5037 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5037",
"html_url": "https://github.com/huggingface/datasets/pull/5037",
"diff_url": "https://github.com/huggingface/datasets/pull/5037.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5037.patch",
"merged_at": "2022-09-30T16:03... | true |
1,389,094,075 | 5,036 | Add oversampling strategy iterable datasets interleave | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-28T10:10:23 | 2022-09-30T12:30:48 | 2022-09-30T12:28:23 | Hello everyone,
Following issue #4893 and PR #4831, I propose here an oversampling strategy for a list of `IterableDataset` objects.
The `all_exhausted` strategy stops building the new dataset as soon as all samples in each dataset have been added at least once.
It follows roughly the same logic behind #4831, namely... | ylacombe | https://github.com/huggingface/datasets/pull/5036 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5036",
"html_url": "https://github.com/huggingface/datasets/pull/5036",
"diff_url": "https://github.com/huggingface/datasets/pull/5036.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5036.patch",
"merged_at": "2022-09-30T12:28... | true |
1,388,914,476 | 5,035 | Fix typos in load docstrings and comments | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-28T08:05:07 | 2022-09-28T17:28:40 | 2022-09-28T17:26:15 | Minor fix of typos in load docstrings and comments | albertvillanova | https://github.com/huggingface/datasets/pull/5035 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5035",
"html_url": "https://github.com/huggingface/datasets/pull/5035",
"diff_url": "https://github.com/huggingface/datasets/pull/5035.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5035.patch",
"merged_at": "2022-09-28T17:26... | true |
1,388,855,136 | 5,034 | Update README.md of yahoo_answers_topics dataset | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5034). All of your documentation changes will be reflected on that endpoint.",
"Thanks, @borgr. We have removed all dataset scripts from this repo. Subsequent PRs should be opened directly on the Hugging Face Hub.",
"Do you m... | 2022-09-28T07:17:33 | 2022-10-06T15:56:05 | 2022-10-04T13:49:25 | null | borgr | https://github.com/huggingface/datasets/pull/5034 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5034",
"html_url": "https://github.com/huggingface/datasets/pull/5034",
"diff_url": "https://github.com/huggingface/datasets/pull/5034.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5034.patch",
"merged_at": null
} | true |
1,388,842,236 | 5,033 | Remove redundant code from some dataset module factories | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-28T07:06:26 | 2022-09-28T16:57:51 | 2022-09-28T16:55:12 | This PR removes some redundant code introduced by mistake after a refactoring in:
- #4576 | albertvillanova | https://github.com/huggingface/datasets/pull/5033 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5033",
"html_url": "https://github.com/huggingface/datasets/pull/5033",
"diff_url": "https://github.com/huggingface/datasets/pull/5033.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5033.patch",
"merged_at": "2022-09-28T16:55... | true |
1,388,270,935 | 5,032 | new dataset type: single-label and multi-label video classification | open | [
"Hi ! You can in the `features` folder how we implemented the audio and image feature types.\r\n\r\nWe can have something similar to videos. What we need to decide:\r\n- the video loading library to use\r\n- the output format when a user accesses a video type object\r\n- what parameters a `Video()` feature type nee... | 2022-09-27T19:40:11 | 2022-11-02T19:10:13 | null | **Is your feature request related to a problem? Please describe.**
In my research, I am dealing with multi-modal (audio+text+frame sequence) video classification. It would be great if the datasets library supported generating multi-modal batches from a video dataset.
**Describe the solution you'd like**
Assume I h... | fcakyon | https://github.com/huggingface/datasets/issues/5032 | null | false |
1,388,201,146 | 5,031 | Support hfh 0.10 implicit auth | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq it is now released so you can move forward with it :) ",
"I took your comments into account @Wauplin :)\r\nI also bumped the requirement to 0.2.0 because we're using `set_access_token`\r\n\r\ncc @albertvillanova WDYT ? I e... | 2022-09-27T18:37:49 | 2022-09-30T09:18:24 | 2022-09-30T09:15:59 | In huggingface-hub 0.10 the `token` parameter is deprecated for dataset_info and list_repo_files in favor of use_auth_token.
Moreover, if `use_auth_token=None`, the user's token is used implicitly.
I took those two changes into account
Close https://github.com/huggingface/datasets/issues/4990
TODO:
- [x] fi... | lhoestq | https://github.com/huggingface/datasets/pull/5031 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5031",
"html_url": "https://github.com/huggingface/datasets/pull/5031",
"diff_url": "https://github.com/huggingface/datasets/pull/5031.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5031.patch",
"merged_at": "2022-09-30T09:15... | true |
1,388,061,340 | 5,030 | Fast dataset iter | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I ran some benchmarks (focused on the data fetching part of `__iter__`) and it seems like the combination `table.to_reader(batch_size)` + `RecordBatch.slice` performs the best ([script](https://gist.github.com/mariosasko/0248288a2e3a... | 2022-09-27T16:44:51 | 2022-09-29T15:50:44 | 2022-09-29T15:48:17 | Use `pa.Table.to_reader` to make iteration over examples/batches faster in `Dataset.{__iter__, map}`
TODO:
* [x] benchmarking (the only benchmark for now - iterating over (single) examples of `bookcorpus` (75 mil examples) in Colab is approx. 2.3x faster)
* [x] check if iterating over bigger chunks + slicing to fe... | mariosasko | https://github.com/huggingface/datasets/pull/5030 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5030",
"html_url": "https://github.com/huggingface/datasets/pull/5030",
"diff_url": "https://github.com/huggingface/datasets/pull/5030.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5030.patch",
"merged_at": "2022-09-29T15:48... | true |
1,387,600,960 | 5,029 | Fix import in `ClassLabel` docstring example | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-27T11:35:29 | 2022-09-27T14:03:24 | 2022-09-27T12:27:50 | This PR addresses a super-simple fix: adding a missing `import` to the `ClassLabel` docstring example, as it was formatted as `from datasets Features`, so it's been fixed to `from datasets import Features`. | alvarobartt | https://github.com/huggingface/datasets/pull/5029 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5029",
"html_url": "https://github.com/huggingface/datasets/pull/5029",
"diff_url": "https://github.com/huggingface/datasets/pull/5029.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5029.patch",
"merged_at": "2022-09-27T12:27... | true |
1,386,272,533 | 5,028 | passing parameters to the method passed to Dataset.from_generator() | closed | [
"Hi! Yes, you can either use the `gen_kwargs` param in `Dataset.from_generator` (`ds = Dataset.from_generator(gen, gen_kwargs={\"param1\": val})`) or wrap the generator function with `functools.partial`\r\n(`ds = Dataset.from_generator(functools.partial(gen, param1=\"val\"))`) to pass custom parameters to it.\r\n"
... | 2022-09-26T15:20:06 | 2022-10-03T13:00:00 | 2022-10-03T13:00:00 | Big thanks for providing dataset creation via a generator.
I want to ask whether there is any way that parameters can be passed to the Dataset.from_generator() method, as follows.
```
from datasets import Dataset
def gen(param1):
    for idx in range(len(custom_dataset)):
yield custom_dataset[id... | Basir-mahmood | https://github.com/huggingface/datasets/issues/5028 | null | false |
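Both approaches mentioned in the answer to issue #5028 above (`gen_kwargs` or `functools.partial`) come down to binding extra arguments before the generator function is called. A minimal sketch without the `datasets` library, where `gen` and `param1` are hypothetical stand-ins for the user's code:

```python
from functools import partial

def gen(param1):
    # Hypothetical stand-in for the user's generator: yields examples
    # derived from `param1` instead of a real custom_dataset.
    for idx in range(3):
        yield {"text": f"{param1}-{idx}"}

# Bind the extra parameter first, so the result can be called with no
# arguments; the library's `gen_kwargs` argument performs the equivalent
# binding, e.g. Dataset.from_generator(gen, gen_kwargs={"param1": "val"}).
bound = partial(gen, param1="val")
print([ex["text"] for ex in bound()])  # ['val-0', 'val-1', 'val-2']
```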
1,386,153,072 | 5,027 | Fix typo in error message | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-26T14:10:09 | 2022-09-27T12:28:03 | 2022-09-27T12:26:02 | null | severo | https://github.com/huggingface/datasets/pull/5027 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5027",
"html_url": "https://github.com/huggingface/datasets/pull/5027",
"diff_url": "https://github.com/huggingface/datasets/pull/5027.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5027.patch",
"merged_at": "2022-09-27T12:26... | true |
1,386,071,154 | 5,026 | patch CI_HUB_TOKEN_PATH with Path instead of str | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-26T13:19:01 | 2022-09-26T14:30:55 | 2022-09-26T14:28:45 | Should fix the tests for `huggingface_hub==0.10.0rc0` prerelease (see [failed CI](https://github.com/huggingface/datasets/actions/runs/3127805250/jobs/5074879144)).
Related to [this thread](https://huggingface.slack.com/archives/C02V5EA0A95/p1664195165294559) (internal link).
Note: this should be a backward compat... | Wauplin | https://github.com/huggingface/datasets/pull/5026 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5026",
"html_url": "https://github.com/huggingface/datasets/pull/5026",
"diff_url": "https://github.com/huggingface/datasets/pull/5026.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5026.patch",
"merged_at": "2022-09-26T14:28... | true |
1,386,011,239 | 5,025 | Custom Json Dataset Throwing Error when batch is False | closed | [
"Hi! Our processors are meant to be used in `batched` mode, so if `batched` is `False`, you need to drop the batch dimension (the error message warns you that the array has an extra dimension meaning it's 4D instead of 3D) to avoid the error:\r\n```python\r\ndef prepare_examples(examples):\r\n #Some preporcessin... | 2022-09-26T12:38:39 | 2022-09-27T19:50:00 | 2022-09-27T19:50:00 | ## Describe the bug
A clear and concise description of what the bug is.
I tried to create my custom dataset using below code
```
from datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D
from torchvision import transforms
from transformers import AutoProcessor
# we'll use the Auto API here -... | jmandivarapu1 | https://github.com/huggingface/datasets/issues/5025 | null | false |
1,385,947,624 | 5,024 | Fix string features of xcsr dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-26T11:55:36 | 2022-09-28T07:56:18 | 2022-09-28T07:54:19 | This PR fixes string features of `xcsr` dataset to avoid character splitting.
Fix #5023.
CC: @yangxqiao, @yuchenlin | albertvillanova | https://github.com/huggingface/datasets/pull/5024 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5024",
"html_url": "https://github.com/huggingface/datasets/pull/5024",
"diff_url": "https://github.com/huggingface/datasets/pull/5024.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5024.patch",
"merged_at": "2022-09-28T07:54... | true |
1,385,881,112 | 5,023 | Text strings are split into lists of characters in xcsr dataset | closed | [] | 2022-09-26T11:11:50 | 2022-09-28T07:54:20 | 2022-09-28T07:54:20 | ## Describe the bug
Text strings are split into lists of characters.
Example for "X-CSQA-en":
```
{'id': 'd3845adc08414fda',
'lang': 'en',
'question': {'stem': ['T',
'h',
'e',
' ',
'd',
'e',
'n',
't',
'a',
'l',
' ',
'o',
'f',
'f',
'i',
'c',
'e',
... | albertvillanova | https://github.com/huggingface/datasets/issues/5023 | null | false |
1,385,432,859 | 5,022 | Fix languages of X-CSQA configs in xcsr dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @lhoestq, I had missed that... ",
"thx for the super fast work @albertvillanova ! any estimate for when the relevant release will happen?\r\n\r\nThanks again ",
"@thesofakillers after a recent change in our library (see #4... | 2022-09-26T05:13:39 | 2022-09-26T12:27:20 | 2022-09-26T10:57:30 | Fix #5017.
CC: @yangxqiao, @yuchenlin | albertvillanova | https://github.com/huggingface/datasets/pull/5022 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5022",
"html_url": "https://github.com/huggingface/datasets/pull/5022",
"diff_url": "https://github.com/huggingface/datasets/pull/5022.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5022.patch",
"merged_at": "2022-09-26T10:57... | true |
1,385,351,250 | 5,021 | Split is inferred from filename and overrides metadata.jsonl | closed | [
"Hi! What's the structure of your image folder? `datasets` by default tries to infer to what split each file belongs based on directory/file names. If it's OK to load all the images inside the `dataset` folder in the `train` split, you can do the following:\r\n```python\r\ndataset = load_dataset(\"imagefolder\", da... | 2022-09-26T03:22:14 | 2022-09-29T08:07:50 | 2022-09-29T08:07:50 | ## Describe the bug
Including the strings "test" or "train" anywhere in a filename causes `datasets` to infer the split and silently ignore all other files.
This behavior is documented for directory names but not filenames: https://huggingface.co/docs/datasets/image_dataset#imagefolder
## Steps to reproduce th... | float-trip | https://github.com/huggingface/datasets/issues/5021 | null | false |
1,384,684,078 | 5,020 | Fix URLs of sbu_captions dataset | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-24T14:00:33 | 2022-09-28T07:20:20 | 2022-09-28T07:18:23 | Forbidden
You don't have permission to access /~vicente/sbucaptions/sbu-captions-all.tar.gz on this server.
Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.
Apache/2.4.6 (Red Hat Enterprise Linux) OpenSSL/1.0.2k-fips PHP/5.4.16 mod_fcgid/2.3.9 mod_ws... | donglixp | https://github.com/huggingface/datasets/pull/5020 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5020",
"html_url": "https://github.com/huggingface/datasets/pull/5020",
"diff_url": "https://github.com/huggingface/datasets/pull/5020.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5020.patch",
"merged_at": "2022-09-28T07:18... | true |
1,384,673,718 | 5,019 | Update swiss judgment prediction | closed | [
"Thank you very much for the detailed review @albertvillanova!\r\n\r\nI updated the PR with the requested changes. ",
"At the end, I had to manually fix the conflict, so that CI tests are launched.\r\n\r\nPLEASE NOTE: you should first pull to incorporate the previous commit\r\n```shell\r\ngit pull\r\n```",
"_Th... | 2022-09-24T13:28:57 | 2022-09-28T07:13:39 | 2022-09-28T05:48:50 | Hi,
I updated the dataset to include additional data made available recently. When I test it locally, it seems to work. However, I get the following error with the dummy data creation:
`Dummy data generation done but dummy data test failed since splits ['train', 'validation', 'test'] have 0 examples for config 'fr... | JoelNiklaus | https://github.com/huggingface/datasets/pull/5019 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5019",
"html_url": "https://github.com/huggingface/datasets/pull/5019",
"diff_url": "https://github.com/huggingface/datasets/pull/5019.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5019.patch",
"merged_at": "2022-09-28T05:48... | true |
1,384,146,585 | 5,018 | Create all YAML dataset_info | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5018). All of your documentation changes will be reflected on that endpoint.",
"Closing since https://github.com/huggingface/datasets/pull/4974 removed all the datasets scripts.\r\n\r\nIndividual PRs must be opened on the Huggi... | 2022-09-23T18:08:15 | 2023-09-24T09:33:21 | 2022-10-03T17:08:05 | Following https://github.com/huggingface/datasets/pull/4926
Creates all the `dataset_info` YAML fields in the dataset cards
The JSON are also updated using the simplified backward compatible format added in https://github.com/huggingface/datasets/pull/4926
Needs https://github.com/huggingface/datasets/pull/4926 ... | lhoestq | https://github.com/huggingface/datasets/pull/5018 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5018",
"html_url": "https://github.com/huggingface/datasets/pull/5018",
"diff_url": "https://github.com/huggingface/datasets/pull/5018.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5018.patch",
"merged_at": null
} | true |
1,384,022,463 | 5,017 | xcsr: X-CSQA simply uses english for all alleged non-english data | closed | [
"Thanks for reporting, @thesofakillers. Good catch. We are fixing this. "
] | 2022-09-23T16:11:54 | 2022-09-26T10:57:31 | 2022-09-26T10:57:31 | ## Describe the bug
All the alleged non-english subcollections for the X-CSQA task in the [xcsr benchmark dataset ](https://huggingface.co/datasets/xcsr) seem to be copies of the english subcollection, rather than translations. This is in contrast to the data description:
> we automatically translate the original C... | thesofakillers | https://github.com/huggingface/datasets/issues/5017 | null | false |
1,383,883,058 | 5,016 | Fix tar extraction vuln | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-23T14:22:21 | 2022-09-29T12:42:26 | 2022-09-29T12:40:28 | Fix for CVE-2007-4559
Description:
Directory traversal vulnerability in the (1) extract and (2) extractall functions in the tarfile
module in Python allows user-assisted remote attackers to overwrite arbitrary files via a .. (dot dot)
sequence in filenames in a TAR archive, a related issue to CVE-2001-1267.
I ... | lhoestq | https://github.com/huggingface/datasets/pull/5016 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5016",
"html_url": "https://github.com/huggingface/datasets/pull/5016",
"diff_url": "https://github.com/huggingface/datasets/pull/5016.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5016.patch",
"merged_at": "2022-09-29T12:40... | true |
1,383,485,558 | 5,015 | Transfer dataset scripts to Hub | closed | [
"Sounds good ! Can I help with anything ?"
] | 2022-09-23T08:48:10 | 2022-10-05T07:15:57 | 2022-10-05T07:15:57 | Before merging:
- #4974
TODO:
- [x] Create label: ["dataset contribution"](https://github.com/huggingface/datasets/pulls?q=label%3A%22dataset+contribution%22)
- [x] Create project: [Datasets: Transfer datasets to Hub](https://github.com/orgs/huggingface/projects/22/)
- [x] PRs:
- [x] Add dataset: we should r... | albertvillanova | https://github.com/huggingface/datasets/issues/5015 | null | false |
1,383,422,639 | 5,014 | I need to read the custom dataset in conll format | open | [
"Hi! We don't currently have a builder for parsing custom `conll` datasets, but I guess we could add one as a packaged module (similarly to what [TFDS](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/dataset_builders/conll/conll_dataset_builder.py) did). @lhoestq @albertvillanova WDYT?\r... | 2022-09-23T07:49:42 | 2022-11-02T11:57:15 | null | I need to read the custom dataset in conll format
| shell-nlp | https://github.com/huggingface/datasets/issues/5014 | null | false |
1,383,415,971 | 5,013 | would huggingface like publish cpp binding for datasets package ? | closed | [
"Hi ! Can you share more information about your use case ? How could it help you to have cpp bindings versus using the python libraries ?",
"> Hi ! Can you share more information about your use case ? How could it help you to have cpp bindings versus using the python libraries ?\r\n\r\nfor example ,the huggingfac... | 2022-09-23T07:42:49 | 2023-02-24T16:20:57 | 2023-02-24T16:20:57 | HI:
I use cpp env libtorch, I like use hugggingface ,but huggingface not cpp binding, would you like publish cpp binding for it.
thanks | mullerhai | https://github.com/huggingface/datasets/issues/5013 | null | false |
1,382,851,096 | 5,012 | Force JSON format regardless of file naming on S3 | closed | [
"Hi ! Support for URIs like `s3://...` is not implemented yet in `data_files=`. You can use the HTTP URL instead if your data is public in the meantime",
"Hi,\r\nI want to make sure I understand this response. I have a set of files on S3 that are private for security reasons. Because they are not public files I ... | 2022-09-22T18:28:15 | 2023-08-16T09:58:36 | 2023-08-16T09:58:36 | I have a file on S3 created by Data Version Control, it looks like `s3://dvc/ac/badff5b134382a0f25248f1b45d7b2` but contains a json file. If I run
```python
dataset = load_dataset(
"json",
data_files='s3://dvc/ac/badff5b134382a0f25248f1b45d7b2'
)
```
It gives me
```
InvalidSchema: No connection adap... | junwang-wish | https://github.com/huggingface/datasets/issues/5012 | null | false |
1,382,609,587 | 5,011 | Audio: `encode_example` fails with IndexError | closed | [
"Sorry bug on my part π
Closing "
] | 2022-09-22T15:07:27 | 2022-09-23T09:05:18 | 2022-09-23T09:05:18 | ## Describe the bug
Loading the dataset [earnings-22](https://huggingface.co/datasets/sanchit-gandhi/earnings22_split) from the Hub yields an Index Error. I created this dataset locally and then pushed to hub at the specified URL. Thus, I expect the dataset should work out-of-the-box! Indeed, the dataset viewer functi... | sanchit-gandhi | https://github.com/huggingface/datasets/issues/5011 | null | false |
1,382,308,799 | 5,010 | Add deprecation warning to multilingual_librispeech dataset card | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-22T11:41:59 | 2022-09-23T12:04:37 | 2022-09-23T12:02:45 | Besides the current deprecation warning in the script of `multilingual_librispeech`, this PR adds a deprecation warning to its dataset card as well.
The format of the deprecation warning is aligned with the one in the library documentation when docstrings contain the `<Deprecated/>` tag.
Related to:
- #4060 | albertvillanova | https://github.com/huggingface/datasets/pull/5010 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5010",
"html_url": "https://github.com/huggingface/datasets/pull/5010",
"diff_url": "https://github.com/huggingface/datasets/pull/5010.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5010.patch",
"merged_at": "2022-09-23T12:02... | true |
1,381,194,067 | 5,009 | Error loading StonyBrookNLP/tellmewhy dataset from hub even though local copy loads correctly | closed | [
"I think this is because some columns are mostly empty lists. In particular the train and validation splits only have empty lists for `val_ann`. Therefore the type inference doesn't know which type is inside (or it would have to scan the other splits first before knowing).\r\n\r\nYou can fix that by specifying the ... | 2022-09-21T16:23:06 | 2022-09-29T13:07:29 | 2022-09-29T13:07:29 | ## Describe the bug
I have added a new dataset with the identifier `StonyBrookNLP/tellmewhy` to the hub. When I load the individual files using my local copy using `dataset = datasets.load_dataset("json", data_files="data/train.jsonl")`, it loads the dataset correctly. However, when I try to load it from the hub, I ge... | ykl7 | https://github.com/huggingface/datasets/issues/5009 | null | false |
1,381,090,903 | 5,008 | Re-apply input columns change | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-21T15:09:01 | 2022-09-22T13:57:36 | 2022-09-22T13:55:23 | Fixes the `filter` + `input_columns` combination, which is used in the `transformers` examples for instance.
Revert #5006 (which in turn reverts #4971)
Fix https://github.com/huggingface/datasets/issues/4858 | mariosasko | https://github.com/huggingface/datasets/pull/5008 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5008",
"html_url": "https://github.com/huggingface/datasets/pull/5008",
"diff_url": "https://github.com/huggingface/datasets/pull/5008.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5008.patch",
"merged_at": "2022-09-22T13:55... | true |
1,381,007,607 | 5,007 | Add some note about running the transformers ci before a release | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-21T14:14:25 | 2022-09-22T10:16:14 | 2022-09-22T10:14:06 | null | lhoestq | https://github.com/huggingface/datasets/pull/5007 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5007",
"html_url": "https://github.com/huggingface/datasets/pull/5007",
"diff_url": "https://github.com/huggingface/datasets/pull/5007.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5007.patch",
"merged_at": "2022-09-22T10:14... | true |
1,380,968,395 | 5,006 | Revert input_columns change | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Merging this one and I'll check if it fixes the `transformers` CI before doing a patch release"
] | 2022-09-21T13:49:20 | 2022-09-21T14:14:33 | 2022-09-21T14:11:57 | Revert https://github.com/huggingface/datasets/pull/4971
Fix https://github.com/huggingface/datasets/issues/5005 | lhoestq | https://github.com/huggingface/datasets/pull/5006 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5006",
"html_url": "https://github.com/huggingface/datasets/pull/5006",
"diff_url": "https://github.com/huggingface/datasets/pull/5006.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5006.patch",
"merged_at": "2022-09-21T14:11... | true |
1,380,952,960 | 5,005 | Release 2.5.0 breaks transformers CI | closed | [
"Shall we revert https://github.com/huggingface/datasets/pull/4971 @mariosasko ?\r\n\r\nAnd for consistency we can update IterableDataset.map later"
] | 2022-09-21T13:39:19 | 2022-09-21T14:11:57 | 2022-09-21T14:11:57 | ## Describe the bug
As reported by @lhoestq:
> see https://app.circleci.com/pipelines/github/huggingface/transformers/47634/workflows/b491886b-e66e-4edb-af96-8b459e72aa25/jobs/564563
this is used here: [https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55[β¦]torch/speech-pretraining/ru... | albertvillanova | https://github.com/huggingface/datasets/issues/5005 | null | false |
1,380,860,606 | 5,004 | Remove license tag file and validation | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-21T12:35:14 | 2022-09-22T11:47:41 | 2022-09-22T11:45:46 | As requested, we are removing the validation of the licenses from `datasets` because this is done on the Hub.
Fix #4994.
Related to:
- #4926, which is removing all the validation from `datasets` | albertvillanova | https://github.com/huggingface/datasets/pull/5004 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5004",
"html_url": "https://github.com/huggingface/datasets/pull/5004",
"diff_url": "https://github.com/huggingface/datasets/pull/5004.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5004.patch",
"merged_at": "2022-09-22T11:45... | true |