id (int64) | number (int64) | title (string) | state (string) | comments (list of strings) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | body (string) | user (string) | html_url (string) | pull_request (dict) | is_pull_request (bool)
|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,467,719,635 | 5,310 | Support xPath for Windows pathnames | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-29T09:20:47 | 2022-11-30T12:00:09 | 2022-11-30T11:57:16 | This PR implements a string representation of `xPath`, which is valid for local paths (including Windows) and remote URLs.
Additionally, some `os.path` methods are fixed for remote URLs on Windows machines.
Now, on Windows machines:
```python
In [2]: str(xPath("C:\\dir\\file.txt"))
Out[2]: 'C:\\dir\\file.txt'
In [... | albertvillanova | https://github.com/huggingface/datasets/pull/5310 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5310",
"html_url": "https://github.com/huggingface/datasets/pull/5310",
"diff_url": "https://github.com/huggingface/datasets/pull/5310.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5310.patch",
"merged_at": "2022-11-30T11:57... | true |
1,466,758,987 | 5,309 | Close stream in `ArrowWriter.finalize` before inference error | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-28T16:59:39 | 2022-12-07T12:55:20 | 2022-12-07T12:52:15 | Ensure the file stream is closed in `ArrowWriter.finalize` before raising the `SchemaInferenceError` to avoid the `PermissionError` on Windows in `incomplete_dir`'s `shutil.rmtree`. | mariosasko | https://github.com/huggingface/datasets/pull/5309 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5309",
"html_url": "https://github.com/huggingface/datasets/pull/5309",
"diff_url": "https://github.com/huggingface/datasets/pull/5309.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5309.patch",
"merged_at": "2022-12-07T12:52... | true |
1,466,552,281 | 5,308 | Support `topdown` parameter in `xwalk` | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I like the `kwargs` approach, thanks!"
] | 2022-11-28T14:42:41 | 2022-12-09T12:58:55 | 2022-12-09T12:55:59 | Add support for the `topdown` parameter in `xwalk` when `fsspec>=2022.11.0` is installed. | mariosasko | https://github.com/huggingface/datasets/pull/5308 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5308",
"html_url": "https://github.com/huggingface/datasets/pull/5308",
"diff_url": "https://github.com/huggingface/datasets/pull/5308.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5308.patch",
"merged_at": "2022-12-09T12:55... | true |
1,466,477,427 | 5,307 | Use correct dataset type in `from_generator` docs | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-28T13:59:10 | 2022-11-28T15:30:37 | 2022-11-28T15:27:26 | Use the correct dataset type in the `from_generator` docs (example with sharding). | mariosasko | https://github.com/huggingface/datasets/pull/5307 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5307",
"html_url": "https://github.com/huggingface/datasets/pull/5307",
"diff_url": "https://github.com/huggingface/datasets/pull/5307.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5307.patch",
"merged_at": "2022-11-28T15:27... | true |
1,465,968,639 | 5,306 | Can't use custom feature description when loading a dataset | closed | [
"Forgot to actually convert the feature dict to a Feature object. Closing."
] | 2022-11-28T07:55:44 | 2022-11-28T08:11:45 | 2022-11-28T08:11:44 | ### Describe the bug
I have created a feature dictionary to describe my datasets' column types, to use when loading the dataset, following [the doc](https://huggingface.co/docs/datasets/main/en/about_dataset_features). It crashes at dataset load.
### Steps to reproduce the bug
```python
# Creating features
task_... | clefourrier | https://github.com/huggingface/datasets/issues/5306 | null | false |
1,465,627,826 | 5,305 | Dataset joelito/mc4_legal does not work with multiple files | closed | [
"Thanks for reporting @JoelNiklaus.\r\n\r\nPlease note that since we moved all dataset loading scripts to the Hub, the issues and pull requests relative to specific datasets are directly handled on the Hub, in their Community tab. I'm transferring this issue there: https://huggingface.co/datasets/joelito/mc4_legal/... | 2022-11-28T00:16:16 | 2022-11-28T07:22:42 | 2022-11-28T07:22:42 | ### Describe the bug
The dataset https://huggingface.co/datasets/joelito/mc4_legal works for languages like bg with a single data file, but not for languages with multiple files like de. It shows zero rows for the de dataset.
joelniklaus@Joels-MacBook-Pro ~/N/P/C/L/p/m/mc4_legal (main) [1]> python test_mc4_legal.... | JoelNiklaus | https://github.com/huggingface/datasets/issues/5305 | null | false |
1,465,110,367 | 5,304 | timit_asr doesn't load the test split. | closed | [
"The [timit_asr.py](https://huggingface.co/datasets/timit_asr/blob/main/timit_asr.py) script iterates over the WAV files per split directory using this:\r\n```python\r\nwav_paths = sorted(Path(data_dir).glob(f\"**/{split}/**/*.wav\"))\r\nwav_paths = wav_paths if wav_paths else sorted(Path(data_dir).glob(f\"**/{spli... | 2022-11-26T10:18:22 | 2023-02-10T16:33:21 | 2023-02-10T16:33:21 | ### Describe the bug
When I use the function ```timit = load_dataset('timit_asr', data_dir=data_dir)```, it only loads the train split, not the test split.
I tried changing the directory and file names from lower case to upper case for the test split, but it does not work at all.
```python
DatasetDict({
train: Datase... | seyong92 | https://github.com/huggingface/datasets/issues/5304 | null | false |
1,464,837,251 | 5,303 | Skip dataset verifications by default | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"100% agree that the checksum verification is overkill and not super useful. But I think this PR would also disable the check on num_examples no ?\r\n \r\nAs a user I would like to know if the dataset I'm loading changed significantly... | 2022-11-25T18:39:09 | 2023-02-13T16:50:42 | 2023-02-13T16:43:47 | Skip the dataset verifications (split and checksum verifications, duplicate keys check) by default unless a dataset is being tested (`datasets-cli test/run_beam`). The main goal is to avoid running the checksum check in the default case due to how expensive it can be for large datasets.
PS: Maybe we should deprecate... | mariosasko | https://github.com/huggingface/datasets/pull/5303 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5303",
"html_url": "https://github.com/huggingface/datasets/pull/5303",
"diff_url": "https://github.com/huggingface/datasets/pull/5303.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5303.patch",
"merged_at": "2023-02-13T16:43... | true |
1,464,778,901 | 5,302 | Improve `use_auth_token` docstring and deprecate `use_auth_token` in `download_and_prepare` | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-25T17:09:21 | 2022-12-09T14:20:15 | 2022-12-09T14:17:20 | Clarify in the docstrings what happens when `use_auth_token` is `None` and deprecate the `use_auth_token` param in `download_and_prepare`. | mariosasko | https://github.com/huggingface/datasets/pull/5302 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5302",
"html_url": "https://github.com/huggingface/datasets/pull/5302",
"diff_url": "https://github.com/huggingface/datasets/pull/5302.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5302.patch",
"merged_at": "2022-12-09T14:17... | true |
1,464,749,156 | 5,301 | Return a split Dataset in load_dataset | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5301). All of your documentation changes will be reflected on that endpoint.",
"Just noticed that now we have to deal with indexed & split datasets. The remaining tests are failing because one should be able to get an indexed d... | 2022-11-25T16:35:54 | 2023-09-24T10:06:15 | 2023-02-21T13:13:13 | ...instead of a DatasetDict.
```python
# now supported
ds = load_dataset("squad")
ds[0]
for example in ds:
pass
# still works
ds["train"]
ds["validation"]
# new
ds.splits # Dict[str, Dataset] | None
# soon to be supported (not in this PR)
ds = load_dataset("dataset_with_no_splits")
ds[0]
f... | lhoestq | https://github.com/huggingface/datasets/pull/5301 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5301",
"html_url": "https://github.com/huggingface/datasets/pull/5301",
"diff_url": "https://github.com/huggingface/datasets/pull/5301.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5301.patch",
"merged_at": null
} | true |
1,464,697,136 | 5,300 | Use same `num_proc` for dataset download and generation | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I noticed this bug the other day and was going to look into it! \"Where are these processes coming from?\" ;-)"
] | 2022-11-25T15:37:42 | 2022-12-07T12:55:39 | 2022-12-07T12:52:51 | Use the same `num_proc` value for data download and generation. Additionally, do not set `num_proc` to 16 in `DownloadManager` by default (`num_proc` now has to be specified explicitly). | mariosasko | https://github.com/huggingface/datasets/pull/5300 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5300",
"html_url": "https://github.com/huggingface/datasets/pull/5300",
"diff_url": "https://github.com/huggingface/datasets/pull/5300.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5300.patch",
"merged_at": "2022-12-07T12:52... | true |
1,464,695,091 | 5,299 | Fix xopen for Windows pathnames | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-25T15:35:28 | 2022-11-29T08:23:58 | 2022-11-29T08:21:24 | This PR fixes a bug in `xopen` function for Windows pathnames.
Fix #5298. | albertvillanova | https://github.com/huggingface/datasets/pull/5299 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5299",
"html_url": "https://github.com/huggingface/datasets/pull/5299",
"diff_url": "https://github.com/huggingface/datasets/pull/5299.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5299.patch",
"merged_at": "2022-11-29T08:21... | true |
1,464,681,871 | 5,298 | Bug in xopen with Windows pathnames | closed | [] | 2022-11-25T15:21:32 | 2022-11-29T08:21:25 | 2022-11-29T08:21:25 | Currently, the `xopen` function has a bug with local Windows pathnames:
From its implementation:
```python
def xopen(file: str, mode="r", *args, **kwargs):
file = _as_posix(PurePath(file))
main_hop, *rest_hops = file.split("::")
if is_local_path(main_hop):
return open(file, mode, *args, **kwarg... | albertvillanova | https://github.com/huggingface/datasets/issues/5298 | null | false |
1,464,554,491 | 5,297 | Fix xjoin for Windows pathnames | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-25T13:30:17 | 2022-11-29T08:07:39 | 2022-11-29T08:05:12 | This PR fixes a bug in `xjoin` function with Windows pathnames.
Fix #5296. | albertvillanova | https://github.com/huggingface/datasets/pull/5297 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5297",
"html_url": "https://github.com/huggingface/datasets/pull/5297",
"diff_url": "https://github.com/huggingface/datasets/pull/5297.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5297.patch",
"merged_at": "2022-11-29T08:05... | true |
1,464,553,580 | 5,296 | Bug in xjoin with Windows pathnames | closed | [] | 2022-11-25T13:29:33 | 2022-11-29T08:05:13 | 2022-11-29T08:05:13 | Currently, the `xjoin` function has a bug with local Windows pathnames: instead of returning the OS-dependent joined pathname, it always returns it in POSIX format.
```python
from datasets.download.streaming_download_manager import xjoin
path = xjoin("C:\\Users\\USERNAME", "filename.txt")
```
Join path should be:
... | albertvillanova | https://github.com/huggingface/datasets/issues/5296 | null | false |
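For reference, the expected result is the OS-dependent join that `os.path.join` produces; a minimal sketch of the contrast described in this issue (the Windows-specific results only appear when run on Windows):

```python
import os

from datasets.download.streaming_download_manager import xjoin

path = "C:\\Users\\USERNAME"
# expected: the OS-dependent join, e.g. 'C:\\Users\\USERNAME\\filename.txt' on Windows
print(os.path.join(path, "filename.txt"))
# reported: xjoin always returned the POSIX form, 'C:/Users/USERNAME/filename.txt'
print(xjoin(path, "filename.txt"))
```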
1,464,006,743 | 5,295 | Extractions failed when .zip file located on read-only path (e.g., SageMaker FastFile mode) | closed | [
"Hi ! Thanks for reporting. Indeed the lock file should be placed in a directory with write permission (e.g. in the directory where the archive is extracted).",
"I opened https://github.com/huggingface/datasets/pull/5320 to fix this - it places the lock file in the cache directory instead of trying to put in next... | 2022-11-25T03:59:43 | 2023-07-21T14:39:09 | 2023-07-21T14:39:09 | ### Describe the bug
Hi,
`load_dataset()` does not work with .zip files located in a read-only directory. Looks like it's because Dataset creates a lock file in the [same directory](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/utils/extract.py) as the .zip file.
... | verdimrc | https://github.com/huggingface/datasets/issues/5295 | null | false |
1,463,679,582 | 5,294 | Support streaming datasets with pathlib.Path.with_suffix | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-24T18:04:38 | 2022-11-29T07:09:08 | 2022-11-29T07:06:32 | This PR extends the support in streaming mode for datasets that use `pathlib.Path.with_suffix`.
Fix #5293. | albertvillanova | https://github.com/huggingface/datasets/pull/5294 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5294",
"html_url": "https://github.com/huggingface/datasets/pull/5294",
"diff_url": "https://github.com/huggingface/datasets/pull/5294.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5294.patch",
"merged_at": "2022-11-29T07:06... | true |
1,463,669,201 | 5,293 | Support streaming datasets with pathlib.Path.with_suffix | closed | [] | 2022-11-24T17:52:08 | 2022-11-29T07:06:33 | 2022-11-29T07:06:33 | Extend support for streaming datasets that use `pathlib.Path.with_suffix`.
This feature will be useful e.g. for datasets containing text files and annotated files with the same name but different extension. | albertvillanova | https://github.com/huggingface/datasets/issues/5293 | null | false |
1,463,053,832 | 5,292 | Missing documentation build for versions 2.7.1 and 2.6.2 | closed | [
"- Build docs for 2.6.2:\r\n - Commit: a6a5a1cf4cdf1e0be65168aed5a327f543001fe8\r\n - Build docs GH Action: https://github.com/huggingface/datasets/actions/runs/3539470622/jobs/5941404044\r\n- Build docs for 2.7.1:\r\n - Commit: 5ef1ab1cc06c2b7a574bf2df454cd9fcb071ccb2\r\n - Build docs GH Action: https://github... | 2022-11-24T09:42:10 | 2022-11-24T10:10:02 | 2022-11-24T10:10:02 | After the patch releases [2.7.1](https://github.com/huggingface/datasets/releases/tag/2.7.1) and [2.6.2](https://github.com/huggingface/datasets/releases/tag/2.6.2), the online docs were not properly built (the build_documentation workflow was not triggered).
There was a fix by:
- #5291
However, both documentati... | albertvillanova | https://github.com/huggingface/datasets/issues/5292 | null | false |
1,462,983,472 | 5,291 | [build doc] for v2.7.1 & v2.6.2 | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"doc versions are built https://huggingface.co/docs/datasets/index"
] | 2022-11-24T08:54:47 | 2022-11-24T09:14:10 | 2022-11-24T09:11:15 | Do NOT merge. Using this PR to build docs for [v2.7.1](https://github.com/huggingface/datasets/pull/5291/commits/f4914af20700f611b9331a9e3ba34743bbeff934) & [v2.6.2](https://github.com/huggingface/datasets/pull/5291/commits/025f85300a0874eeb90a20393c62f25ac0accaa0) | mishig25 | https://github.com/huggingface/datasets/pull/5291 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5291",
"html_url": "https://github.com/huggingface/datasets/pull/5291",
"diff_url": "https://github.com/huggingface/datasets/pull/5291.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5291.patch",
"merged_at": null
} | true |
1,462,716,766 | 5,290 | fix error where reading breaks when batch missing an assigned column feature | open | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5290). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-24T03:53:46 | 2022-11-25T03:21:54 | null | null | eunseojo | https://github.com/huggingface/datasets/pull/5290 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5290",
"html_url": "https://github.com/huggingface/datasets/pull/5290",
"diff_url": "https://github.com/huggingface/datasets/pull/5290.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5290.patch",
"merged_at": null
} | true |
1,462,543,139 | 5,289 | Added support for JXL images. | open | [
"I'm fine with the addition of jxl in the list of known image extensions, this way users that have the plugin can work with their JXL datasets. WDYT @mariosasko ?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5289). All of your documentation changes will be reflected on ... | 2022-11-23T23:16:33 | 2022-11-29T18:49:46 | null | JPEG-XL is the most advanced of the next-generation of image codecs, supporting both lossless and lossy files β with better compression and quality than PNG and JPG respectively. It has reduced the disk sizes and bandwidth required for many of the datasets I use.
Pillow does not yet support JXL, but there's a plugi... | alexjc | https://github.com/huggingface/datasets/pull/5289 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5289",
"html_url": "https://github.com/huggingface/datasets/pull/5289",
"diff_url": "https://github.com/huggingface/datasets/pull/5289.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5289.patch",
"merged_at": null
} | true |
1,462,134,067 | 5,288 | Lossy json serialization - deserialization of dataset info | open | [
"Hi ! JSON is a lossy format indeed. If you want to keep the feature types or other metadata I'd encourage you to store them as well. For example you can use `dataset.info.write_to_directory` and `DatasetInfo.from_directory` to store the feature types, split info, description, license etc."
] | 2022-11-23T17:20:15 | 2022-11-25T12:53:51 | null | ### Describe the bug
Saving a dataset to disk as json (using `to_json`) and then loading it again (using `load_dataset`) results in features whose labels are not type-cast correctly. In the code snippet below, `features.label` should have a label of type `ClassLabel` but has type `Value` instead.
### Steps to re... | anuragprat1k | https://github.com/huggingface/datasets/issues/5288 | null | false |
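A minimal, self-contained sketch of the workaround suggested in the comment, with illustrative file and directory names:

```python
import os

from datasets import ClassLabel, Dataset, DatasetInfo, Features, Value, load_dataset

features = Features({"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])})
original = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]}, features=features)

original.to_json("data.json")                 # the JSON export loses the ClassLabel type
os.makedirs("info_dir", exist_ok=True)
original.info.write_to_directory("info_dir")  # persist the feature types separately

reloaded = load_dataset("json", data_files="data.json", split="train")
info = DatasetInfo.from_directory("info_dir")
reloaded = reloaded.cast(info.features)       # "label" is a ClassLabel again, not a plain Value
```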
1,461,971,889 | 5,287 | Fix methods using `IterableDataset.map` that lead to `features=None` | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"_The documentation is not available anymore as the PR was closed or merged._",
"Maybe other options are:\r\n* Keep the `info.features` to `None` if those were initially `None`\r\n* Infer the features with pre-fetching just if the `... | 2022-11-23T15:33:25 | 2022-11-28T15:43:14 | 2022-11-28T12:53:22 | As currently `IterableDataset.map` is setting the `info.features` to `None` every time as we don't know the output of the dataset in advance, `IterableDataset` methods such as `rename_column`, `rename_columns`, and `remove_columns`. that internally use `map` lead to the features being `None`.
This PR is related to #... | alvarobartt | https://github.com/huggingface/datasets/pull/5287 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5287",
"html_url": "https://github.com/huggingface/datasets/pull/5287",
"diff_url": "https://github.com/huggingface/datasets/pull/5287.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5287.patch",
"merged_at": "2022-11-28T12:53... | true |
1,461,908,087 | 5,286 | FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json | closed | [
"I found a solution \r\n\r\nIf you specifically install datasets==1.18 and then run\r\n\r\nimport datasets\r\nwiki = datasets.load_dataset('wikipedia', '20200501.en')\r\nthen this should work (it worked for me.)",
"I have the same problem here but installing datasets==1.18 wont work for me\r\n",
"This works wit... | 2022-11-23T14:54:15 | 2024-11-23T01:16:41 | 2022-11-25T11:33:14 | ### Describe the bug
I follow the steps provided on the website [https://huggingface.co/datasets/wikipedia](https://huggingface.co/datasets/wikipedia)
$ pip install apache_beam mwparserfromhell
>>> from datasets import load_dataset
>>> load_dataset("wikipedia", "20220301.en")
however this results in the follo... | roritol | https://github.com/huggingface/datasets/issues/5286 | null | false |
1,461,521,215 | 5,285 | Save file name in embed_storage | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I updated the tests, met le know if it sounds good to you now :)"
] | 2022-11-23T10:55:54 | 2022-11-24T14:11:41 | 2022-11-24T14:08:37 | Having the file name is useful in case we need to check the extension of the file (e.g. mp3), or in general in case it includes some metadata information (track id, image id etc.)
Related to https://github.com/huggingface/datasets/issues/5276 | lhoestq | https://github.com/huggingface/datasets/pull/5285 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5285",
"html_url": "https://github.com/huggingface/datasets/pull/5285",
"diff_url": "https://github.com/huggingface/datasets/pull/5285.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5285.patch",
"merged_at": "2022-11-24T14:08... | true |
1,461,519,733 | 5,284 | Features of IterableDataset set to None by remove column | closed | [
"Related to https://github.com/huggingface/datasets/issues/5245",
"#self-assign",
"Thanks @lhoestq and @alvarobartt!\r\n\r\nThis would be extremely helpful to have working for the Whisper fine-tuning event - we're **only** training using streaming mode, so it'll be quite important to have this feature working t... | 2022-11-23T10:54:59 | 2025-02-07T11:36:41 | 2022-11-28T12:53:24 | ### Describe the bug
The `remove_column` method of the IterableDataset sets the dataset features to None.
### Steps to reproduce the bug
```python
from datasets import Audio, load_dataset
# load LS in streaming mode
dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
... | sanchit-gandhi | https://github.com/huggingface/datasets/issues/5284 | null | false |
1,460,291,003 | 5,283 | Release: 2.6.2 | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-22T17:36:24 | 2022-11-22T17:50:12 | 2022-11-22T17:47:02 | null | albertvillanova | https://github.com/huggingface/datasets/pull/5283 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5283",
"html_url": "https://github.com/huggingface/datasets/pull/5283",
"diff_url": "https://github.com/huggingface/datasets/pull/5283.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5283.patch",
"merged_at": "2022-11-22T17:47... | true |
1,460,238,928 | 5,282 | Release: 2.7.1 | closed | [] | 2022-11-22T16:58:54 | 2022-11-22T17:21:28 | 2022-11-22T17:21:27 | null | albertvillanova | https://github.com/huggingface/datasets/pull/5282 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5282",
"html_url": "https://github.com/huggingface/datasets/pull/5282",
"diff_url": "https://github.com/huggingface/datasets/pull/5282.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5282.patch",
"merged_at": "2022-11-22T17:21... | true |
1,459,930,271 | 5,281 | Support cloud storage in load_dataset | open | [
"Or for example an archive on GitHub releases! Before I added support for JXL (locally only, PR still pending) I was considering hosting my files on GitHub instead...",
"+1 to this. I would like to use 'audiofolder' with a data_dir that's on S3, for example. I don't want to upload my dataset to the Hub, but I wo... | 2022-11-22T14:00:10 | 2024-11-15T15:03:41 | null | Would be nice to be able to do
```python
data_files=["s3://..."] # or gs:// or any cloud storage path
storage_options = {...}
load_dataset(..., data_files=data_files, storage_options=storage_options)
```
The idea would be to use `fsspec` as in `download_and_prepare` and `save_to_disk`.
This has been reque... | lhoestq | https://github.com/huggingface/datasets/issues/5281 | null | false |
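For context, a minimal sketch of the fsspec-based pattern that `download_and_prepare` already supports and that the issue proposes to extend to `load_dataset` (bucket name and credentials are illustrative):

```python
from datasets import load_dataset_builder

# prepare a dataset directly into cloud storage via fsspec
storage_options = {"key": "aws_access_key_id", "secret": "aws_secret_access_key"}
builder = load_dataset_builder("squad")
builder.download_and_prepare(
    "s3://my-bucket/squad",
    storage_options=storage_options,
    file_format="parquet",
)
```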
1,459,823,179 | 5,280 | Import error | closed | [
"Hi ! Can you \r\n```python\r\nimport platform\r\nprint(platform.python_version())\r\n```\r\nto see that it returns ?",
"Hi,\n\n3.8.13\n\nGet Outlook for Android<https://aka.ms/AAb9ysg>\n________________________________\nFrom: Quentin Lhoest ***@***.***>\nSent: Tuesday, November 22, 2022 2:37:02 PM\nTo: huggingfa... | 2022-11-22T12:56:43 | 2022-12-15T19:57:40 | 2022-12-15T19:57:40 | https://github.com/huggingface/datasets/blob/cd3d8e637cfab62d352a3f4e5e60e96597b5f0e9/src/datasets/__init__.py#L28
Hi,
I get an error at the above line. I have Python version 3.8.13, and the message says I need python>=3.7, which is true, but I think the if statement is not working properly (or the message is wrong) | feketedavid1012 | https://github.com/huggingface/datasets/issues/5280 | null | false |
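As a general illustration (not the library's actual check), comparing version tuples rather than version strings avoids this kind of false positive:

```python
import sys

# compare tuples, not strings: "3.10" < "3.7" is True for strings but not for version tuples
if sys.version_info < (3, 7):
    raise RuntimeError(f"Python >= 3.7 is required, found {sys.version.split()[0]}")
```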
1,459,635,002 | 5,279 | Warn about checksums | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm also in favor of disabling this by default - it's kinda impractical",
"Great, thanks for the quick turnaround on this!"
] | 2022-11-22T10:58:48 | 2022-11-23T11:43:50 | 2022-11-23T09:47:02 | It takes a lot of time on big datasets to compute the checksums, we should at least add a warning to notify the user about this step. I also mentioned how to disable it, and added a tqdm bar (delay=5 seconds)
cc @ola13 | lhoestq | https://github.com/huggingface/datasets/pull/5279 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5279",
"html_url": "https://github.com/huggingface/datasets/pull/5279",
"diff_url": "https://github.com/huggingface/datasets/pull/5279.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5279.patch",
"merged_at": "2022-11-23T09:47... | true |
1,459,574,490 | 5,278 | load_dataset does not read jsonl metadata file properly | closed | [
"Can you try to remove \"drop_labels=false\" ? It may force the loader to infer the labels instead of reading the metadata",
"Hi, thanks for responding. I tried that, but it does not change anything.",
"Can you try updating `datasets` ? Metadata support was added in `datasets` 2.4",
"Probably the issue, will ... | 2022-11-22T10:24:46 | 2023-02-14T14:48:16 | 2022-11-23T11:38:35 | ### Describe the bug
Hi, I'm following [this page](https://huggingface.co/docs/datasets/image_dataset) to create a dataset of images and captions via an image folder and a metadata.json file, but I can't seem to get the dataloader to recognize the "text" column. It just spits out "image" and "label" as features.
B... | 065294847 | https://github.com/huggingface/datasets/issues/5278 | null | false |
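For reference, a sketch of the layout the `imagefolder` loader expects for captions, roughly as in the linked docs (file names are illustrative); recent `datasets` versions look for a `metadata.jsonl` file with a mandatory `file_name` column:

```python
from datasets import load_dataset

# expected layout (illustrative):
#   folder/
#   ├── metadata.jsonl   # one JSON object per line, e.g. {"file_name": "0001.png", "text": "a caption"}
#   ├── 0001.png
#   └── 0002.png
# "file_name" points to the image relative to metadata.jsonl; extra keys become columns
dataset = load_dataset("imagefolder", data_dir="folder", split="train")
print(dataset.features)  # should show "image" and "text" once the metadata is picked up
```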
1,459,388,551 | 5,277 | Remove YAML integer keys from class_label metadata | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Also note that this approach is valid when metadata keys are str, but also if they are int.\r\n- This will be helpful for any community dataset using old integer keys in their metadata",
"perfect !"
] | 2022-11-22T08:34:07 | 2022-11-22T13:58:26 | 2022-11-22T13:55:49 | Fix partially #5275. | albertvillanova | https://github.com/huggingface/datasets/pull/5277 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5277",
"html_url": "https://github.com/huggingface/datasets/pull/5277",
"diff_url": "https://github.com/huggingface/datasets/pull/5277.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5277.patch",
"merged_at": "2022-11-22T13:55... | true |
1,459,363,442 | 5,276 | Bug in downloading common_voice data and pushing a small chunk of it to one's own hub | closed | [
"Sounds like one of the file is not a valid one, can you make sure you uploaded valid mp3 files ?",
"Well I just sharded the original commonVoice dataset and pushed a small chunk of it in a private rep\n\nWhat did go wrong?\n\nHolen Sie sich Outlook fΓΌr iOS<https://aka.ms/o0ukef>\n________________________________... | 2022-11-22T08:17:53 | 2023-07-21T14:33:10 | 2023-07-21T14:33:10 | ### Describe the bug
I'm trying to load the common voice dataset. Currently there is no implementation to download just part of the data, and I need just one part of it, without downloading the entire dataset
Help please?
 https://github.com/huggingface/moon-landing/pull/4609",
"FYI there are still 2k+ weekly users on `datasets` 2.6.1 which doesn't support the string label format for... | 2022-11-22T08:14:47 | 2023-01-26T10:52:35 | 2023-01-26T10:40:21 | After an internal discussion (https://github.com/huggingface/moon-landing/issues/4563):
- YAML integer keys are not preserved server-side: they are transformed to strings
- See for example this Hub PR: https://huggingface.co/datasets/acronym_identification/discussions/1/files
- Original:
```yaml
... | albertvillanova | https://github.com/huggingface/datasets/issues/5275 | null | false |
1,458,646,455 | 5,274 | load_dataset possibly broken for gated datasets? | closed | [
"@BradleyHsu",
"Btw, thanks very much for finding the hub rollback temporary fix and bringing the issue to our attention @KhoomeiK!",
"I see the same issue when calling `load_dataset('poloclub/diffusiondb', 'large_random_1k')` with `datasets==2.7.1` and `huggingface-hub=0.11.0`. No issue with `datasets=2.6.1` a... | 2022-11-21T21:59:53 | 2023-05-27T00:06:14 | 2022-11-28T02:50:42 | ### Describe the bug
When trying to download the [winoground dataset](https://huggingface.co/datasets/facebook/winoground), I get this error unless I roll back the version of huggingface-hub:
```
[/usr/local/lib/python3.7/dist-packages/huggingface_hub/utils/_validators.py](https://localhost:8080/#) in validate_rep... | TristanThrush | https://github.com/huggingface/datasets/issues/5274 | null | false |
1,458,018,050 | 5,273 | download_mode="force_redownload" does not refresh cached dataset | open | [] | 2022-11-21T14:12:43 | 2022-11-21T14:13:03 | null | ### Describe the bug
`load_dataset` does not refresh the dataset when features are imported from an external file, even with `download_mode="force_redownload"`. The bug is not limited to nested fields; however, it is more likely to occur with nested fields.
### Steps to reproduce the bug
To reproduce the bug 3 files are ne... | nomisto | https://github.com/huggingface/datasets/issues/5273 | null | false |
1,456,940,021 | 5,272 | Use pyarrow Tensor dtype | open | [
"Hi ! We're using the Arrow format for the datasets, and PyArrow tensors are not part of the Arrow format AFAIK:\r\n\r\n> There is no direct support in the arrow columnar format to store Tensors as column values.\r\n\r\nsource: https://github.com/apache/arrow/issues/4802#issuecomment-508494694",
"@wesm @rok its b... | 2022-11-20T15:18:41 | 2024-11-11T03:03:17 | null | ### Feature request
I was going the discussion of converting tensors to lists.
Is there a way to leverage pyarrow's Tensors for nested arrays / embeddings?
For example:
```python
import pyarrow as pa
import numpy as np
x = np.array([[2, 2, 4], [4, 5, 100]], np.int32)
pa.Tensor.from_numpy(x, dim_names=["dim1... | franz101 | https://github.com/huggingface/datasets/issues/5272 | null | false |
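As a point of comparison, a minimal sketch of how fixed-shape arrays can already be stored with the `Array2D` feature type, which stays within the Arrow columnar format:

```python
import numpy as np

from datasets import Array2D, Dataset, Features

features = Features({"embedding": Array2D(shape=(2, 3), dtype="int32")})
ds = Dataset.from_dict(
    {"embedding": [np.array([[2, 2, 4], [4, 5, 100]], np.int32)]},
    features=features,
)
ds = ds.with_format("numpy")
print(ds[0]["embedding"].shape)  # (2, 3)
```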
1,456,807,738 | 5,271 | Fix #5269 | closed | [
"See <https://github.com/huggingface/datasets/issues/5269>"
] | 2022-11-20T07:50:49 | 2022-11-21T15:07:19 | 2022-11-21T15:06:38 | ```
$ datasets-cli convert --datasets_directory <TAB>
datasets_directory
benchmarks/ docs/ metrics/ notebooks/ src/ templates/ tests/ utils/
```
| Freed-Wu | https://github.com/huggingface/datasets/pull/5271 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5271",
"html_url": "https://github.com/huggingface/datasets/pull/5271",
"diff_url": "https://github.com/huggingface/datasets/pull/5271.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5271.patch",
"merged_at": null
} | true |
1,456,508,990 | 5,270 | When len(_URLS) > 16, download will hang | open | [
"It can fix the bug temporarily.\r\n```python\r\nfrom datasets import DownloadConfig\r\nconfig = DownloadConfig(num_proc=8)\r\nIn [5]: dataset = load_dataset('Freed-Wu/kodak', split='test', download_config=config)\r\nDownloading and preparing dataset kodak/default to /home/wzy/.cache/huggingface/datasets/Freed-Wu__... | 2022-11-19T14:27:41 | 2022-11-21T15:27:16 | null | ### Describe the bug
```python
In [9]: dataset = load_dataset('Freed-Wu/kodak', split='test')
Downloading: 100%|██████████| 2.53k/2.53k [00:00<00:00, 1.88MB/s]
[1... | Freed-Wu | https://github.com/huggingface/datasets/issues/5270 | null | false |
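The temporary workaround quoted in the comment, written out in full; it caps the number of parallel download processes:

```python
from datasets import DownloadConfig, load_dataset

config = DownloadConfig(num_proc=8)  # limit parallel downloads to work around the hang
dataset = load_dataset("Freed-Wu/kodak", split="test", download_config=config)
```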
1,456,485,799 | 5,269 | Shell completions | closed | [
"I don't think we need completion on the datasets-cli, since we're mainly developing huggingface-cli",
"I see."
] | 2022-11-19T13:48:59 | 2022-11-21T15:06:15 | 2022-11-21T15:06:14 | ### Feature request
Like <https://github.com/huggingface/huggingface_hub/issues/1197>, datasets-cli maybe need it, too.
### Motivation
See above.
### Your contribution
Maybe. | Freed-Wu | https://github.com/huggingface/datasets/issues/5269 | null | false |
1,455,633,978 | 5,268 | Sharded save_to_disk + multiprocessing | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Added both num_shards and max_shard_size in push_to_hub/save_to_disk. Will take care of updating the tests later",
"It's ready for a final review @mariosasko and @albertvillanova, let me know what you think :)",
"Took your commen... | 2022-11-18T18:50:01 | 2022-12-14T18:25:52 | 2022-12-14T18:22:58 | Added `num_shards=` and `num_proc=` to `save_to_disk()`
EDIT: also added `max_shard_size=` to `save_to_disk()`, and also `num_shards=` to `push_to_hub`
I also:
- deprecated the fs parameter in favor of storage_options (for consistency with the rest of the lib) in save_to_disk and load_from_disk
- always embed t... | lhoestq | https://github.com/huggingface/datasets/pull/5268 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5268",
"html_url": "https://github.com/huggingface/datasets/pull/5268",
"diff_url": "https://github.com/huggingface/datasets/pull/5268.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5268.patch",
"merged_at": "2022-12-14T18:22... | true |
1,455,466,464 | 5,267 | Fix `max_shard_size` docs | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-18T16:55:22 | 2022-11-18T17:28:58 | 2022-11-18T17:25:27 | null | lhoestq | https://github.com/huggingface/datasets/pull/5267 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5267",
"html_url": "https://github.com/huggingface/datasets/pull/5267",
"diff_url": "https://github.com/huggingface/datasets/pull/5267.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5267.patch",
"merged_at": "2022-11-18T17:25... | true |
1,455,281,310 | 5,266 | Specify arguments as keywords in librosa.reshape to avoid future errors | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-18T14:58:47 | 2022-11-21T15:45:02 | 2022-11-21T15:41:57 | Fixes a warning and future deprecation from `librosa.reshape`:
```
FutureWarning: Pass orig_sr=16000, target_sr=48000 as keyword args. From version 0.10 passing these as positional arguments will result in an error
array = librosa.resample(array, sampling_rate, self.sampling_rate, res_type="kaiser_best")
``` | polinaeterna | https://github.com/huggingface/datasets/pull/5266 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5266",
"html_url": "https://github.com/huggingface/datasets/pull/5266",
"diff_url": "https://github.com/huggingface/datasets/pull/5266.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5266.patch",
"merged_at": "2022-11-21T15:41... | true |
1,455,274,864 | 5,265 | Get an IterableDataset from a map-style Dataset | closed | [
"I think `stream` could be misleading since the data is not being streamed from remote endpoints (one could think that's the case when they see `load_dataset` followed by `stream`). Hence, I prefer the second option.\r\n\r\nPS: When we resolve https://github.com/huggingface/datasets/issues/4542, we could add `as_tf... | 2022-11-18T14:54:40 | 2023-02-01T16:36:03 | 2023-02-01T16:36:03 | This is useful to leverage iterable datasets specific features like:
- fast approximate shuffling
- lazy map, filter etc.
Iterating over the resulting iterable dataset should be at least as fast at iterating over the map-style dataset.
Here are some ideas regarding the API:
```python
# 1.
# - consistency wi... | lhoestq | https://github.com/huggingface/datasets/issues/5265 | null | false |
1,455,252,906 | 5,264 | `datasets` can't read a Parquet file in Python 3.9.13 | closed | [
"Could you share the full stack trace please ?\r\n\r\n\r\nCan you also try running this code ? It can be useful to determine if the issue comes from `datasets` or `fsspec` (streaming) or `pyarrow` (parquet reading):\r\n```python\r\nds = load_dataset(\"parquet\", data_files=a_parquet_file_url, use_auth_token=True)\r... | 2022-11-18T14:44:01 | 2023-05-07T09:52:59 | 2022-11-22T11:18:08 | ### Describe the bug
I have an error when trying to load this [dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup-pjj) (it's private but I can add you to the bigcode org). `datasets` can't read one of the parquet files in the Java subset
```python
from datasets import load_dataset
ds = load_data... | loubnabnl | https://github.com/huggingface/datasets/issues/5264 | null | false |
1,455,252,626 | 5,263 | Save a dataset in a determined number of shards | closed | [] | 2022-11-18T14:43:54 | 2022-12-14T18:22:59 | 2022-12-14T18:22:59 | This is useful to distribute the shards to training nodes.
This can be implemented in `save_to_disk` and can also leverage multiprocessing to speed up the process | lhoestq | https://github.com/huggingface/datasets/issues/5263 | null | false |
1,455,171,100 | 5,262 | AttributeError: 'Value' object has no attribute 'names' | closed | [
"Hi ! It looks like your \"isDif\" column is a Sequence of Value(\"string\"), not a Sequence of ClassLabel.\r\n\r\nYou can convert your Value(\"string\") feature type to a ClassLabel feature type this way:\r\n```python\r\nfrom datasets import ClassLabel, Sequence\r\n\r\n# provide the label_names yourself\r\nlabel_n... | 2022-11-18T13:58:42 | 2022-11-22T10:09:24 | 2022-11-22T10:09:23 | Hello
I'm trying to build a model for custom token classification
I already followed the token classification course on huggingface
while adapting the code to my work, this message occures :
'Value' object has no attribute 'names'
Here's my code:
`raw_datasets`
generates
DatasetDict({
train: Datas... | emnaboughariou | https://github.com/huggingface/datasets/issues/5262 | null | false |
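A self-contained sketch of the fix suggested in the comment, using a toy `DatasetDict` and illustrative label names:

```python
from datasets import ClassLabel, Dataset, DatasetDict, Sequence

raw_datasets = DatasetDict({"train": Dataset.from_dict({"isDif": [["B-DIF", "O"], ["O", "O"]]})})

label_names = ["O", "B-DIF", "I-DIF"]  # provide the label names yourself
raw_datasets = raw_datasets.cast_column("isDif", Sequence(ClassLabel(names=label_names)))
print(raw_datasets["train"].features["isDif"].feature.names)  # ['O', 'B-DIF', 'I-DIF']
```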
1,454,647,861 | 5,261 | Add PubTables-1M | open | [
"cc @albertvillanova the author would like to add this dataset to the hub: https://github.com/microsoft/table-transformer/issues/68#issuecomment-1319114621. Could you help him out?"
] | 2022-11-18T07:56:36 | 2022-11-18T08:02:18 | null | ### Name
PubTables-1M
### Paper
https://openaccess.thecvf.com/content/CVPR2022/html/Smock_PubTables-1M_Towards_Comprehensive_Table_Extraction_From_Unstructured_Documents_CVPR_2022_paper.html
### Data
https://github.com/microsoft/table-transformer
### Motivation
Table Transformer is now available in 🤗 Transforme... | NielsRogge | https://github.com/huggingface/datasets/issues/5261 | null | false |
1,453,921,697 | 5,260 | consumer-finance-complaints dataset not loading | open | [
"Thanks for reporting, @adiprasad.\r\n\r\nWe are having a look at it.",
"I have opened an issue in that dataset Community tab on the Hub: https://huggingface.co/datasets/consumer-finance-complaints/discussions/1\r\n\r\nPlease note that in the meantime, you can load the dataset by passing `ignore_verifications=Tru... | 2022-11-17T20:10:26 | 2022-11-18T10:16:53 | null | ### Describe the bug
Error during dataset loading
### Steps to reproduce the bug
```
>>> import datasets
>>> cf_raw = datasets.load_dataset("consumer-finance-complaints")
Downloading builder script: 100%|██████████... | adiprasad | https://github.com/huggingface/datasets/issues/5260 | null | false |
1,453,555,923 | 5,259 | datasets 2.7 introduces sharding error | closed | [
"I notice a comment in the code says:\r\n`Having lists of different sizes makes sharding ambigious, raise an error in this case until we decide how to define sharding without ambiguity for users` \r\n \r\n ... which suggests this update was pushed knowing that it might break some things. But, it didn't seem to h... | 2022-11-17T15:36:52 | 2022-12-24T01:44:02 | 2022-11-18T12:52:05 | ### Describe the bug
dataset fails to load with runtime error
`RuntimeError: Sharding is ambiguous for this dataset: we found several data sources lists of different lengths, and we don't know over which list we should parallelize:
- key audio_files has length 46
- key data has length 0
To fix this, check the ... | DCNemesis | https://github.com/huggingface/datasets/issues/5259 | null | false |
1,453,516,636 | 5,258 | Restore order of split names in dataset_info for canonical datasets | closed | [
"The bulk edit is running...\r\n\r\nSee for example: \r\n- A single config: https://huggingface.co/datasets/acronym_identification/discussions/2\r\n- Multiple configs: https://huggingface.co/datasets/babi_qa/discussions/1",
"TODO: Add \"dataset_info\" YAML metadata to:\r\n- [x] \"chr_en\" has no metadata JSON fil... | 2022-11-17T15:13:15 | 2023-02-16T09:49:05 | 2022-11-19T06:51:37 | After a bulk edit of canonical datasets to create the YAML `dataset_info` metadata, the split names were accidentally sorted alphabetically. See for example:
- https://huggingface.co/datasets/bc2gm_corpus/commit/2384629484401ecf4bb77cd808816719c424e57c
Note that this order is the one appearing in the preview of the... | albertvillanova | https://github.com/huggingface/datasets/issues/5258 | null | false |
1,452,656,891 | 5,257 | remove an unused statement | closed | [] | 2022-11-17T04:00:50 | 2022-11-18T11:04:08 | 2022-11-18T11:04:08 | remove the unused statement: `input_pairs = list(zip())` | WrRan | https://github.com/huggingface/datasets/pull/5257 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5257",
"html_url": "https://github.com/huggingface/datasets/pull/5257",
"diff_url": "https://github.com/huggingface/datasets/pull/5257.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5257.patch",
"merged_at": "2022-11-18T11:04... | true |
1,452,652,586 | 5,256 | fix wrong print | closed | [] | 2022-11-17T03:54:26 | 2022-11-18T11:05:32 | 2022-11-18T11:05:32 | print `encoded_dataset.column_names` not `dataset.column_names` | WrRan | https://github.com/huggingface/datasets/pull/5256 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5256",
"html_url": "https://github.com/huggingface/datasets/pull/5256",
"diff_url": "https://github.com/huggingface/datasets/pull/5256.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5256.patch",
"merged_at": "2022-11-18T11:05... | true |
1,452,631,517 | 5,255 | Add a Depth Estimation dataset - DIODE / NYUDepth / KITTI | closed | [
"Also cc @mariosasko and @lhoestq ",
"Cool ! Let us know if you have questions or if we can help :)\r\n\r\nI guess we'll also have to create the NYU CS Department on the Hub ?",
"> I guess we'll also have to create the NYU CS Department on the Hub ?\r\n\r\nYes, you're right! Let me add it to my profile first, a... | 2022-11-17T03:22:22 | 2022-12-17T12:20:38 | 2022-12-17T12:20:37 | ### Name
NYUDepth
### Paper
http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf
### Data
https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html
### Motivation
Depth estimation is an important problem in computer vision. We have a couple of Depth Estimation models on Hub as well:
* [GLPN... | sayakpaul | https://github.com/huggingface/datasets/issues/5255 | null | false |
1,452,600,088 | 5,254 | typo | closed | [] | 2022-11-17T02:39:57 | 2022-11-18T10:53:45 | 2022-11-18T10:53:45 | null | WrRan | https://github.com/huggingface/datasets/pull/5254 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5254",
"html_url": "https://github.com/huggingface/datasets/pull/5254",
"diff_url": "https://github.com/huggingface/datasets/pull/5254.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5254.patch",
"merged_at": "2022-11-18T10:53... | true |
1,452,588,206 | 5,253 | typo | closed | [] | 2022-11-17T02:22:58 | 2022-11-18T10:53:11 | 2022-11-18T10:53:10 | null | WrRan | https://github.com/huggingface/datasets/pull/5253 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5253",
"html_url": "https://github.com/huggingface/datasets/pull/5253",
"diff_url": "https://github.com/huggingface/datasets/pull/5253.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5253.patch",
"merged_at": "2022-11-18T10:53... | true |
1,451,765,838 | 5,252 | Support for decoding Image/Audio types in map when format type is not default one | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5252). All of your documentation changes will be reflected on that endpoint.",
"Yes, if the image column is the first in the batch keys, it will ... | 2022-11-16T15:02:13 | 2022-12-13T17:01:54 | 2022-12-13T16:59:04 | Add support for decoding the `Image`/`Audio` types in `map` for the formats (Numpy, TF, Jax, PyTorch) other than the default one (Python).
Additional improvements:
* make `Dataset`'s "iter" API cleaner by removing `_iter` and replacing `_iter_batches` with `iter(batch_size)` (also implemented for `IterableDataset`... | mariosasko | https://github.com/huggingface/datasets/pull/5252 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5252",
"html_url": "https://github.com/huggingface/datasets/pull/5252",
"diff_url": "https://github.com/huggingface/datasets/pull/5252.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5252.patch",
"merged_at": "2022-12-13T16:59... | true |
1,451,761,321 | 5,251 | Docs are not generated after latest release | closed | [
"After a discussion with @mishig25:\r\n- He said that this action should be triggered if we call our release branch according to the regex `v*-release`, as transformers does\r\n- I said that our procedure is different: our release branch is *temporary* and it is deleted just after the release PR is merged to main\r... | 2022-11-16T14:59:31 | 2022-11-22T16:27:50 | 2022-11-22T16:27:50 | After the latest `datasets` release version 0.7.0, the docs were not generated.
As we have changed the release procedure (so that now we do not push directly to main branch), maybe we should also change the corresponding GitHub action:
https://github.com/huggingface/datasets/blob/edf1902f954c5568daadebcd8754bdad4... | albertvillanova | https://github.com/huggingface/datasets/issues/5251 | null | false |
1,451,720,030 | 5,250 | Change release procedure to use only pull requests | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5250). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface... | 2022-11-16T14:35:32 | 2022-11-22T16:30:58 | 2022-11-22T16:27:48 | This PR changes the release procedure so that:
- it only makes changes to the main branch via pull requests
- it is no longer necessary to directly commit/push to the main branch
Close #5251.
| albertvillanova | https://github.com/huggingface/datasets/pull/5250 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5250",
"html_url": "https://github.com/huggingface/datasets/pull/5250",
"diff_url": "https://github.com/huggingface/datasets/pull/5250.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5250.patch",
"merged_at": "2022-11-22T16:27... | true |
1,451,692,247 | 5,249 | Protect the main branch from inadvertent direct pushes | closed | [
"It seems all the tasks have been addressed, meaning this issue can be closed, no?"
] | 2022-11-16T14:19:03 | 2023-12-21T10:28:27 | 2023-12-21T10:28:26 | We have decided to implement a protection mechanism in this repository, so that nobody (not even administrators) can inadvertently push accidentally directly to the main branch.
See context here:
- d7c942228b8dcf4de64b00a3053dce59b335f618
To do:
- [x] Protect main branch
- Settings > Branches > Branch protec... | albertvillanova | https://github.com/huggingface/datasets/issues/5249 | null | false |
1,451,338,676 | 5,248 | Complete doc migration | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5248). All of your documentation changes will be reflected on that endpoint.",
"Thanks for the fix @mishig25.\r\n\r\nI guess this is the reason why the docs are not generated for the latest release version 2.7.0? https://huggin... | 2022-11-16T10:41:04 | 2022-11-16T15:06:50 | 2022-11-16T10:41:10 | Reverts huggingface/datasets#5214
Everything is handled on the doc-builder side now | mishig25 | https://github.com/huggingface/datasets/pull/5248 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5248",
"html_url": "https://github.com/huggingface/datasets/pull/5248",
"diff_url": "https://github.com/huggingface/datasets/pull/5248.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5248.patch",
"merged_at": "2022-11-16T10:41... | true |
1,451,297,749 | 5,247 | Set dev version | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5247). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-16T10:17:31 | 2022-11-16T10:22:20 | 2022-11-16T10:17:50 | null | albertvillanova | https://github.com/huggingface/datasets/pull/5247 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5247",
"html_url": "https://github.com/huggingface/datasets/pull/5247",
"diff_url": "https://github.com/huggingface/datasets/pull/5247.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5247.patch",
"merged_at": "2022-11-16T10:17... | true |
1,451,226,055 | 5,246 | Release: 2.7.0 | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-16T09:32:44 | 2022-11-16T09:39:42 | 2022-11-16T09:37:03 | null | albertvillanova | https://github.com/huggingface/datasets/pull/5246 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5246",
"html_url": "https://github.com/huggingface/datasets/pull/5246",
"diff_url": "https://github.com/huggingface/datasets/pull/5246.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5246.patch",
"merged_at": "2022-11-16T09:37... | true |
1,450,376,433 | 5,245 | Unable to rename columns in streaming dataset | closed | [
"Hi @peregilk this bug is directly related to https://github.com/huggingface/datasets/issues/3888, and still not fixed... But I'll try to have a look!",
"Thanks @alvarobartt. It is great if you are able to fix it, but when reading the explanation it seems like it is possible to work around it.\r\n\r\nWe also trie... | 2022-11-15T21:04:41 | 2022-11-28T12:53:24 | 2022-11-28T12:53:24 | ### Describe the bug
Trying to rename a column in a streaming dataset destroys the features object.
### Steps to reproduce the bug
The following code illustrates the error:
```
from datasets import load_dataset
dataset = load_dataset('mc4', 'en', streaming=True, split='train')
dataset.info.features
# {'text':... | peregilk | https://github.com/huggingface/datasets/issues/5245 | null | false |
1,450,019,225 | 5,244 | Allow dataset streaming from private a private source when loading a dataset with a dataset loading script | open | [
"Hi ! What kind of private source ? We're exploring adding support for cloud storage and URIs like s3://, gs:// etc. with authentication in the download manager",
"Hello! It's a google cloud storage, so gs://, but I'm using it with https.\r\nBeing able to provide a file system like [here](https://huggingface.co/d... | 2022-11-15T16:02:10 | 2022-11-23T14:02:30 | null | ### Feature request
Add arguments to the function _get_authentication_headers_for_url_ like custom_endpoint and custom_token in order to add flexibility when downloading files from a private source.
It should also be possible to provide these arguments from the dataset loading script, maybe giving them to the dl_... | bruno-hays | https://github.com/huggingface/datasets/issues/5244 | null | false |
1,449,523,962 | 5,243 | Download only split data | open | [
"Hi @capsabogdan! Unfortunately, it's hard to implement because quite often datasets data is being hosted in a single archive for all splits :( So we have to download the whole archive to split it into splits. This is the case for CommonVoice too. \r\n\r\nHowever, for cases when data is distributed in separate arch... | 2022-11-15T10:15:54 | 2025-02-25T14:47:03 | null | ### Feature request
Is it possible to download only the data that I am requesting and not the entire dataset? I run out of disk space, as it seems to download the entire dataset instead of only the part needed.
common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test",
... | capsabogdan | https://github.com/huggingface/datasets/issues/5243 | null | false |
1,449,069,382 | 5,242 | Failed Data Processing upon upload with zip file full of images | open | [
"cc @abhishekkrthakur @SBrandeis "
] | 2022-11-15T02:47:52 | 2022-11-15T17:59:23 | null | I went to autotrain and under image classification arrived where it was time to prepare my dataset. Screenshot below

I chose the method 2 option. I have a csv file with two columns. ~23,000 files.
I... | scrambled2 | https://github.com/huggingface/datasets/issues/5242 | null | false |
1,448,510,407 | 5,241 | Support hfh rc version | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-14T18:05:47 | 2022-11-15T16:11:30 | 2022-11-15T16:09:31 | otherwise the code doesn't work for hfh 0.11.0rc0
following #5237 | lhoestq | https://github.com/huggingface/datasets/pull/5241 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5241",
"html_url": "https://github.com/huggingface/datasets/pull/5241",
"diff_url": "https://github.com/huggingface/datasets/pull/5241.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5241.patch",
"merged_at": "2022-11-15T16:09... | true |
1,448,478,617 | 5,240 | Cleaner error tracebacks for dataset script errors | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq Good catch! This currently leads to an AttributeError (due to `writer` being None) on this line:\r\nhttps://github.com/huggingface/datasets/blob/fed1628d49a91f9ae259ddf6edbb252c7972d9a3/src/datasets/builder.py#L1552\r\n"
] | 2022-11-14T17:42:02 | 2022-11-15T18:26:48 | 2022-11-15T18:24:38 | Make the traceback of the errors raised in `_generate_examples` cleaner for easier debugging. Additionally, initialize the `writer` in the for-loop to avoid the `ValueError` from `ArrowWriter.finalize` raised in the `finally` block when no examples are yielded before the `_generate_examples` error.
<details>
<s... | mariosasko | https://github.com/huggingface/datasets/pull/5240 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5240",
"html_url": "https://github.com/huggingface/datasets/pull/5240",
"diff_url": "https://github.com/huggingface/datasets/pull/5240.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5240.patch",
"merged_at": "2022-11-15T18:24... | true |
1,448,211,373 | 5,239 | Add num_proc to from_csv/generator/json/parquet/text | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5239). All of your documentation changes will be reflected on that endpoint.",
"I ended up moving `num_proc` to `AbstractDatasetReader.__init__` :)\r\n\r\nLet me know if it sounds good to you now"
] | 2022-11-14T14:53:00 | 2022-12-06T15:39:10 | 2022-12-06T15:39:09 | Allow multiprocessing to from_* methods | lhoestq | https://github.com/huggingface/datasets/pull/5239 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5239",
"html_url": "https://github.com/huggingface/datasets/pull/5239",
"diff_url": "https://github.com/huggingface/datasets/pull/5239.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5239.patch",
"merged_at": "2022-12-06T15:39... | true |
1,448,211,251 | 5,238 | Make `Version` hashable | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-14T14:52:55 | 2022-11-14T15:30:02 | 2022-11-14T15:27:35 | Add `__hash__` to the `Version` class to make it hashable (and remove the unneeded methods), as `Version("0.0.0")` is the default value of `BuilderConfig.version` and the default fields of a dataclass need to be hashable in Python 3.11.
Fix https://github.com/huggingface/datasets/issues/5230 | mariosasko | https://github.com/huggingface/datasets/pull/5238 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5238",
"html_url": "https://github.com/huggingface/datasets/pull/5238",
"diff_url": "https://github.com/huggingface/datasets/pull/5238.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5238.patch",
"merged_at": "2022-11-14T15:27... | true |
1,448,202,491 | 5,237 | Encode path only for old versions of hfh | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-14T14:46:57 | 2022-11-14T17:38:18 | 2022-11-14T17:35:59 | Next version of `huggingface-hub` 0.11 does encode the `path`, and we don't want to encode twice | lhoestq | https://github.com/huggingface/datasets/pull/5237 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5237",
"html_url": "https://github.com/huggingface/datasets/pull/5237",
"diff_url": "https://github.com/huggingface/datasets/pull/5237.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5237.patch",
"merged_at": "2022-11-14T17:35... | true |
1,448,190,801 | 5,236 | Handle ArrowNotImplementedError caused by try_type being Image or Audio in cast | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Not sure how we can have a test that is relevant for this though - feel free to add one if you have ideas\r\n\r\nYes, this was my reasoning for not adding a test. This change is pretty simple, so I think it's OK not to have a test ... | 2022-11-14T14:38:59 | 2022-11-14T16:04:29 | 2022-11-14T16:01:48 | Handle the `ArrowNotImplementedError` thrown when `try_type` is `Image` or `Audio` and the input array cannot be converted to their storage formats.
Reproducer:
```python
from datasets import Dataset
from PIL import Image
import requests
ds = Dataset.from_dict({"image": [Image.open(requests.get("https://uploa... | mariosasko | https://github.com/huggingface/datasets/pull/5236 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5236",
"html_url": "https://github.com/huggingface/datasets/pull/5236",
"diff_url": "https://github.com/huggingface/datasets/pull/5236.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5236.patch",
"merged_at": "2022-11-14T16:01... | true |
1,448,052,660 | 5,235 | Pin `typer` version in tests to <0.5 to fix Windows CI | closed | [] | 2022-11-14T13:17:02 | 2022-11-14T15:43:01 | 2022-11-14T13:41:12 | Otherwise `click` fails on Windows:
```
Traceback (most recent call last):
File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\runpy.py", line 85, in _run_code
exec(code, run_glob... | polinaeterna | https://github.com/huggingface/datasets/pull/5235 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5235",
"html_url": "https://github.com/huggingface/datasets/pull/5235",
"diff_url": "https://github.com/huggingface/datasets/pull/5235.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5235.patch",
"merged_at": "2022-11-14T13:41... | true |
1,447,999,062 | 5,234 | fix: dataset path should be absolute | closed | [
"Good catch thanks ! Have you tried to use the absolue path in `MemoryMappedTable.__init__` in `table.py`?\r\n\r\nI think it can fix issues with relative paths at more levels than just fixing it `load_from_disk`. If it works I think it would be a more robust fix to this issue",
"@lhoestq right, that actually fixe... | 2022-11-14T12:47:40 | 2022-12-07T23:49:22 | 2022-12-07T23:46:34 | cache_file_name depends on dataset's path.
A simple way where this could cause a problem:
```
import os
import datasets
def add_prefix(example):
example["text"] = "Review: " + example["text"]
return example
ds = datasets.load_from_disk("a/relative/path")
os.chdir("/tmp")
ds_1 = ds.map(add_... | vigsterkr | https://github.com/huggingface/datasets/pull/5234 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5234",
"html_url": "https://github.com/huggingface/datasets/pull/5234",
"diff_url": "https://github.com/huggingface/datasets/pull/5234.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5234.patch",
"merged_at": "2022-12-07T23:46... | true |
1,447,906,868 | 5,233 | Fix shards in IterableDataset.from_generator | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-14T11:42:09 | 2022-11-14T14:16:03 | 2022-11-14T14:13:22 | Allow to define a sharded iterable dataset | lhoestq | https://github.com/huggingface/datasets/pull/5233 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5233",
"html_url": "https://github.com/huggingface/datasets/pull/5233",
"diff_url": "https://github.com/huggingface/datasets/pull/5233.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5233.patch",
"merged_at": "2022-11-14T14:13... | true |
1,446,294,165 | 5,232 | Incompatible dill versions in datasets 2.6.1 | closed | [
"Thanks for reporting, @vinaykakade.\r\n\r\nWe are discussing about making a release early this week.\r\n\r\nPlease note that in the meantime, in your specific case (as we also pointed out here: https://github.com/huggingface/datasets/issues/5162#issuecomment-1291720293), you can circumvent the issue by pinning `mu... | 2022-11-12T06:46:23 | 2022-11-14T08:24:43 | 2022-11-14T08:07:59 | ### Describe the bug
datasets version 2.6.1 has a dependency on dill<0.3.6. This causes a conflict with dill>=0.3.6 used by multiprocess dependency in datasets 2.6.1
This issue is already fixed in https://github.com/huggingface/datasets/pull/5166/files, but not yet been released. Please release a new version of the... | vinaykakade | https://github.com/huggingface/datasets/issues/5232 | null | false |
1,445,883,267 | 5,231 | Using `set_format(type='torch', columns=columns)` makes Array2D/3D columns stop formatting correctly | closed | [
"In case others find this, the problem was not with set_format, but my usages of `to_pandas()` and `from_pandas()` which I was using during dataset splitting; somewhere in the chain of converting to and from pandas the `Array2D/Array3D` types get converted to series of `Sequence()` types"
] | 2022-11-11T18:54:36 | 2022-11-11T20:42:29 | 2022-11-11T18:59:50 | I have a Dataset with two Features defined as follows:
```
'image': Array3D(dtype="int64", shape=(3, 224, 224)),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
```
On said dataset, if I `dataset.set_format(type='torch')` and then use the dataset in a dataloader, these columns are correctly cast to Tensors of ... | plamb-viso | https://github.com/huggingface/datasets/issues/5231 | null | false |
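A hedged repro sketch of the setup described above, with smaller hypothetical shapes (requires torch for the formatting step):

```python
import numpy as np
from datasets import Array2D, Array3D, Dataset, Features

features = Features({
    "image": Array3D(dtype="int64", shape=(3, 4, 4)),
    "bbox": Array2D(dtype="int64", shape=(8, 4)),
})
ds = Dataset.from_dict(
    {
        "image": [np.zeros((3, 4, 4), dtype=np.int64)],
        "bbox": [np.zeros((8, 4), dtype=np.int64)],
    },
    features=features,
)
# As long as the Array2D/Array3D features are preserved (i.e. no round-trip
# through pandas, per the comment above), these columns come back as tensors.
ds.set_format(type="torch", columns=["image", "bbox"])
print(type(ds[0]["image"]), ds[0]["image"].shape)
```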
1,445,507,580 | 5,230 | dataclasses error when importing the library in python 3.11 | closed | [
"I opened [this issue](https://github.com/python/cpython/issues/99401).\r\nPython's maintainers say that the issue is caused by [this change](https://docs.python.org/3.11/whatsnew/3.11.html#dataclasses).\r\nI believe adding a `__hash__` method to `datasets.utils.version.Version` should solve (at least partially) th... | 2022-11-11T13:53:49 | 2023-05-25T04:37:05 | 2022-11-14T15:27:37 | ### Describe the bug
When I import datasets using python 3.11 the dataclasses standard library raises the following error:
`ValueError: mutable default <class 'datasets.utils.version.Version'> for field version is not allowed: use default_factory`
When I tried to import the library using the following jupyter note... | yonikremer | https://github.com/huggingface/datasets/issues/5230 | null | false |
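An illustrative sketch of the Python 3.11 behaviour behind this error, using a simplified stand-in class rather than the actual `datasets` code:

```python
from dataclasses import dataclass

class VersionLike:
    """Simplified stand-in for datasets.utils.version.Version."""

    def __init__(self, version_str):
        self.version_str = version_str

    def __eq__(self, other):
        return isinstance(other, VersionLike) and self.version_str == other.version_str

    # Defining __eq__ alone sets __hash__ to None; Python 3.11 then rejects an
    # instance used as a dataclass field default as a "mutable default".
    # Restoring __hash__ makes the default legal again.
    def __hash__(self):
        return hash(self.version_str)

@dataclass
class BuilderConfigLike:
    version: VersionLike = VersionLike("0.0.0")
```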
1,445,121,028 | 5,229 | Type error when calling `map` over dataset containing 0-d tensors | closed | [
"Hi! \r\n\r\nWe could address this by calling `.item()` on such tensors to extract the value, but this would lose us the type, which could lead to storing the generated dataset in a suboptimal format. Considering this, I think the only proper fix would be implementing support for 0-D tensors on Apache Arrow's side ... | 2022-11-11T08:27:28 | 2023-01-13T16:00:53 | 2023-01-13T16:00:53 | ### Describe the bug
0-dimensional tensors in a dataset lead to `TypeError: iteration over a 0-d array` when calling `map`. It is easy to generate such tensors by using `.with_format("...")` on the whole dataset.
### Steps to reproduce the bug
```
ds = datasets.Dataset.from_list([{"a": 1}, {"a": 1}]).with_fo... | phipsgabler | https://github.com/huggingface/datasets/issues/5229 | null | false |
1,444,763,105 | 5,228 | Loading a dataset from the hub fails if you happen to have a folder of the same name | open | [
"`load_dataset` first checks for a local directory before checking for the Hub.\r\n\r\nTo make it explicit that it has to fetch the Hub, we could support the `hffs` syntax:\r\n```python\r\nload_dataset(\"hf://datasets/glue\")\r\n```\r\n\r\nwould that work for you ? Also cc @mariosasko who's leading the `hffs` proje... | 2022-11-11T00:51:54 | 2023-05-03T23:23:04 | null | ### Describe the bug
I'm not 100% sure this should be considered a bug, but it was certainly annoying to figure out the cause of. And perhaps I am just missing a specific argument needed to avoid this conflict. Basically I had a situation where multiple workers were downloading different parts of the glue dataset and ... | dakinggg | https://github.com/huggingface/datasets/issues/5228 | null | false |
1,444,620,094 | 5,227 | datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files | closed | [
"Fixed. Please close.",
"how to fix?i need your help"
] | 2022-11-10T21:57:06 | 2023-10-07T05:04:41 | 2022-11-10T22:05:43 | ### Describe the bug
From these lines:
from datasets import list_datasets, load_dataset
dataset = load_dataset("wikisql","binary")
I get error message:
datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files
And yet the 'wikisql' is reported to exist via the list_datas... | ScottM-wizard | https://github.com/huggingface/datasets/issues/5227 | null | false |
1,444,385,148 | 5,226 | Q: Memory release when removing the column? | closed | [
"Hi ! Datasets are memory mapped from your disk, i.e. they're not loaded in RAM. This is possible thanks to the Arrow data format.\r\n\r\nTherefore the column you remove is not in RAM, so removing it doesn't cause the RAM to decrease.",
"Thanks for the explanation! @lhoestq \r\nI wonder since it is memory mapped,... | 2022-11-10T18:35:27 | 2022-11-29T15:10:10 | 2022-11-29T15:10:10 | ### Describe the bug
How do I release memory when I use methods like `.remove_columns()` or `clear()` in notebooks?
```python
from datasets import load_dataset
common_voice = load_dataset("mozilla-foundation/common_voice_11_0", "ja", use_auth_token=True)
# check memory -> RAM Used (GB): 0.704 / Total (GB) 33.670... | bayartsogt-ya | https://github.com/huggingface/datasets/issues/5226 | null | false |
1,444,305,183 | 5,225 | Add video feature | open | [
"@NielsRogge @rwightman may have additional requirements regarding this feature.\r\n\r\nWhen adding a new (decodable) type, the hardest part is choosing the right decoding library. What I mean by \"right\" here is that it has all the features we need and is easy to install (with GPU support?).\r\n\r\nSome candidate... | 2022-11-10T17:36:11 | 2022-12-02T15:13:15 | null | ### Feature request
Add a `Video` feature to the library so folks can include videos in their datasets.
### Motivation
Being able to load Video data would be quite helpful. However, there are some challenges when it comes to videos:
1. Videos, unlike images, can end up being extremely large files
2. Often times ... | nateraw | https://github.com/huggingface/datasets/issues/5225 | null | false |
1,443,640,867 | 5,224 | Seems to freeze when loading audio dataset with wav files from local folder | closed | [
"I just tried to do the same but changing the `.wav` files to `.mp3` files and that doesn't fix it.",
"I don't know if anyone will ever read this but I've tried to upload the same dataset with google colab and the output seems more clarifying. I didn't specify the train/test split so the dataset wasn't fully uplo... | 2022-11-10T10:29:31 | 2023-04-25T09:54:05 | 2022-11-22T11:24:19 | ### Describe the bug
I'm following the instructions in [https://huggingface.co/docs/datasets/audio_load#audiofolder-with-metadata](url) to be able to load a dataset from a local folder.
I have everything in a folder: a train folder containing the audio files and the csv. When I try to load the dataset and run from term... | uriii3 | https://github.com/huggingface/datasets/issues/5224 | null | false |
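For reference alongside this issue, a minimal AudioFolder sketch following the linked docs page (directory layout and paths are hypothetical):

```python
from datasets import load_dataset

# Expected layout (hypothetical):
#   my_audio/train/metadata.csv   <- needs a "file_name" column pointing to the audio files
#   my_audio/train/*.wav
dataset = load_dataset("audiofolder", data_dir="my_audio")
print(dataset["train"][0]["audio"])
```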
1,442,610,658 | 5,223 | Add SQL guide | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5223). All of your documentation changes will be reflected on that endpoint.",
"I think we may want more content on this page that's not SQL related. Some of that content probably already lives in the main `load` docs page, but... | 2022-11-09T19:10:27 | 2022-11-15T17:40:25 | 2022-11-15T17:40:21 | This PR adapts @nateraw's awesome SQL notebook as a guide for the docs! | stevhliu | https://github.com/huggingface/datasets/pull/5223 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5223",
"html_url": "https://github.com/huggingface/datasets/pull/5223",
"diff_url": "https://github.com/huggingface/datasets/pull/5223.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5223.patch",
"merged_at": "2022-11-15T17:40... | true |
1,442,412,507 | 5,222 | HuggingFace website is incorrectly reporting that my datasets are pickled | closed | [
"cc @McPatate maybe you know what's happening ?",
"Yes I think I know what is happening. We check in zips for pickles, and the UI must display the pickle jar when a scan has an associated list of imports, even when empty.\r\n~I'll fix ASAP !~",
"> I'll fix ASAP !\r\n\r\nActually I'd rather leave it like that f... | 2022-11-09T16:41:16 | 2022-11-09T18:10:46 | 2022-11-09T18:06:57 | ### Describe the bug
HuggingFace is incorrectly reporting that my datasets are pickled. They are not pickled, they are simple ZIP files containing PNG images.
Hopefully this is the right location to report this bug.
### Steps to reproduce the bug
Inspect my dataset repository here: https://huggingface.co/datasets... | ProGamerGov | https://github.com/huggingface/datasets/issues/5222 | null | false |
1,442,309,094 | 5,221 | Cannot push | closed | [
"Did you run `huggingface-cli lfs-enable-largefiles` before committing or before adding ? Maybe you can try before adding\r\n\r\nAnyway I'd encourage you to split your data into several TAR archives if possible, this way the dataset can loaded faster using multiprocessing (by giving each process a subset of shards ... | 2022-11-09T15:32:05 | 2022-11-10T18:11:21 | 2022-11-10T18:11:11 | ### Describe the bug
I am facing an issue when I try to push a tar.gz file of around 11 GB to the Hub.
```
(venv) ββlaptop@laptop ~/PersonalProjects/data/ulaanbal_v0 βΉmainββΊ
β°β$ du -sh *
4.0K README.md
13G data
516K test.jsonl
18M train.jsonl
4.0K ulaanbal_v0.py
11G ulaanbal_v0.tar.gz
452K validation.jsonl... | bayartsogt-ya | https://github.com/huggingface/datasets/issues/5221 | null | false |
1,441,664,377 | 5,220 | Implicit type conversion of lists in to_pandas | closed | [
"I think this behavior comes from PyArrow:\r\n```python\r\nimport pyarrow as pa\r\nt = pa.table({\"a\": [[0]]})\r\nt.to_pandas().a.values[0]\r\n# array([0])\r\n```\r\n\r\nI believe this has to do with zero-copy: you can get a pandas DataFrame without copying the buffers from arrow, and therefore end up with numpy a... | 2022-11-09T08:40:18 | 2022-11-10T16:12:26 | 2022-11-10T16:12:26 | ### Describe the bug
```
ds = Dataset.from_list([{'a':[1,2,3]}])
ds.to_pandas().a.values[0]
```
Results in `array([1, 2, 3])` -- a rather unexpected conversion of types which made downstream tools expecting lists not happy.
### Steps to reproduce the bug
See snippet
### Expected behavior
Keep the original typ... | sanderland | https://github.com/huggingface/datasets/issues/5220 | null | false |
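The zero-copy explanation in the comment above can be reproduced directly with PyArrow; a minimal sketch:

```python
import pyarrow as pa

# List columns come back as numpy arrays when an Arrow table is converted to pandas.
t = pa.table({"a": [[1, 2, 3]]})
df = t.to_pandas()
print(type(df.a.values[0]))     # <class 'numpy.ndarray'>
print(df.a.values[0].tolist())  # [1, 2, 3] -- convert back if plain lists are required
```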
1,441,255,910 | 5,219 | Delta Tables usage using Datasets Library | open | [
"Hi ! Interesting :) Can you provide concrete examples of cases where it can be useful ?",
"Few example blogs and posts that might help on this - \r\n\r\n1. https://hevodata.com/learn/databricks-delta-tables/\r\n2. https://docs.databricks.com/delta/index.html\r\n\r\nBasically, we are looking at utility of Dataset... | 2022-11-09T02:43:56 | 2023-03-02T19:29:12 | null | ### Feature request
Adding compatibility of Datasets library with Delta Format. Elevating the utilities of Datasets library from Machine Learning Scope to Data Engineering Scope as well.
### Motivation
We know datasets library can absorb csv, json, parquet, etc. file formats but it would be great if Datasets library... | reichenbch | https://github.com/huggingface/datasets/issues/5219 | null | false |
1,441,254,194 | 5,218 | Delta Tables usage using Datasets Library | closed | [] | 2022-11-09T02:42:18 | 2022-11-09T02:42:36 | 2022-11-09T02:42:36 | ### Feature request
Adding compatibility of Datasets library with Delta Format. Elevating the utilities of Datasets library from Machine Learning Scope to Data Engineering Scope as well.
### Motivation
We know datasets library can absorb csv, json, parquet, etc. file formats but it would be great if Datasets library... | rcv-koo | https://github.com/huggingface/datasets/issues/5218 | null | false |
1,441,252,740 | 5,217 | Reword E2E training and inference tips in the vision guides | closed | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-09T02:40:01 | 2022-11-10T01:38:09 | 2022-11-10T01:36:09 | Reference: https://github.com/huggingface/datasets/pull/5188#discussion_r1012148730 | sayakpaul | https://github.com/huggingface/datasets/pull/5217 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5217",
"html_url": "https://github.com/huggingface/datasets/pull/5217",
"diff_url": "https://github.com/huggingface/datasets/pull/5217.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5217.patch",
"merged_at": "2022-11-10T01:36... | true |
1,441,041,947 | 5,216 | save_elasticsearch_index | open | [
"Hi ! I think there exist tools to dump and reload an index in your elastic search but I'm not super familiar with it.\r\n\r\nAnyway after reloading an index in elastic search you can call `ds.load_elasticsearch_index` which will connect the index to the dataset without re-indexing"
] | 2022-11-08T23:06:52 | 2022-11-09T13:16:45 | null | Hi,
I am new to Dataset and Elasticsearch. I was wondering whether there is an equivalent approach to saving an Elasticsearch index locally for later use, like save_faiss_index, to remove the need to re-index a dataset? | amobash2 | https://github.com/huggingface/datasets/issues/5216 | null | false |
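A minimal sketch of the reconnection approach mentioned in the comment above (host, port and index name are hypothetical; an Elasticsearch cluster with an existing index is assumed):

```python
from datasets import load_dataset

ds = load_dataset("squad", split="validation")
# Attach an index that already exists in the cluster instead of re-indexing the column.
ds.load_elasticsearch_index("context", es_index_name="my_saved_index", host="localhost", port=9200)
scores, examples = ds.get_nearest_examples("context", "machine learning", k=5)
print(examples["title"][0])
```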
1,440,334,978 | 5,214 | Update github pr docs actions | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5214). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-08T14:43:37 | 2022-11-08T15:39:58 | 2022-11-08T15:39:57 | null | mishig25 | https://github.com/huggingface/datasets/pull/5214 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5214",
"html_url": "https://github.com/huggingface/datasets/pull/5214",
"diff_url": "https://github.com/huggingface/datasets/pull/5214.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5214.patch",
"merged_at": "2022-11-08T15:39... | true |
1,440,037,534 | 5,213 | Add support for different configs with `push_to_hub` | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5213). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface... | 2022-11-08T11:45:47 | 2022-12-02T16:48:23 | 2022-12-02T16:44:07 | will solve #5151
@lhoestq @albertvillanova @mariosasko
This is still a super draft so please ignore code issues but I want to discuss some conceptually important things.
I suggest a way to do `.push_to_hub("repo_id", "config_name")` with pushing parquet files to directories named as `config_name` (inside `data... | polinaeterna | https://github.com/huggingface/datasets/pull/5213 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5213",
"html_url": "https://github.com/huggingface/datasets/pull/5213",
"diff_url": "https://github.com/huggingface/datasets/pull/5213.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5213.patch",
"merged_at": null
} | true |
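A minimal sketch of the behaviour proposed in this PR, with the API shape taken from the PR body and a hypothetical repo id:

```python
from datasets import Dataset

ds_en = Dataset.from_dict({"text": ["hello"]})
ds_fr = Dataset.from_dict({"text": ["bonjour"]})

# Each config is written to its own directory of parquet files in the repo,
# so both configs can later be loaded back by name.
ds_en.push_to_hub("username/multilingual_demo", "en")
ds_fr.push_to_hub("username/multilingual_demo", "fr")
```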
1,439,642,483 | 5,212 | Fix CI require_beam maximum compatible dill version | closed | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5212). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-08T07:30:01 | 2022-11-15T06:32:27 | 2022-11-15T06:32:26 | A previous commit to main branch introduced an additional requirement on maximum compatible `dill` version with `apache-beam` in our CI `require_beam`:
- d7c942228b8dcf4de64b00a3053dce59b335f618
- ec222b220b79f10c8d7b015769f0999b15959feb
This PR fixes the maximum compatible `dill` version with `apache-beam`, which... | albertvillanova | https://github.com/huggingface/datasets/pull/5212 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5212",
"html_url": "https://github.com/huggingface/datasets/pull/5212",
"diff_url": "https://github.com/huggingface/datasets/pull/5212.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5212.patch",
"merged_at": "2022-11-15T06:32... | true |
1,438,544,617 | 5,211 | Update Overview.ipynb google colab | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"WDYT @albertvillanova ?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5211). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-07T15:23:52 | 2022-11-29T15:59:48 | 2022-11-29T15:54:17 | - removed metrics stuff
- added image example
- added audio example (with ffmpeg instructions)
- updated the "add a new dataset" section | lhoestq | https://github.com/huggingface/datasets/pull/5211 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5211",
"html_url": "https://github.com/huggingface/datasets/pull/5211",
"diff_url": "https://github.com/huggingface/datasets/pull/5211.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5211.patch",
"merged_at": "2022-11-29T15:54... | true |
1,438,492,507 | 5,210 | Tweak readme | closed | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Nit: We should also update the `Disclaimers` section to let the dataset owners know they should use Hub discussions rather than GH issues for removal requests/updates",
"Updated the disclaimers section, thanks !\r\n\r\nDoes it soun... | 2022-11-07T14:51:23 | 2022-11-24T11:35:07 | 2022-11-24T11:26:16 | Tweaked some paragraphs mentioning the modalities we support + added a paragraph on security | lhoestq | https://github.com/huggingface/datasets/pull/5210 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5210",
"html_url": "https://github.com/huggingface/datasets/pull/5210",
"diff_url": "https://github.com/huggingface/datasets/pull/5210.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5210.patch",
"merged_at": "2022-11-24T11:26... | true |