| column | dtype | min | max |
| --- | --- | --- | --- |
| id | int64 | 599M | 3.48B |
| number | int64 | 1 | 7.8k |
| title | string (lengths) | 1 | 290 |
| state | string (2 classes) | | |
| comments | list (lengths) | 0 | 30 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-10-05 06:37:50 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-10-05 10:32:43 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-10-01 13:56:03 |
| body | string (lengths) | 0 | 228k |
| user | string (lengths) | 3 | 26 |
| html_url | string (lengths) | 46 | 51 |
| pull_request | dict | | |
| is_pull_request | bool (2 classes) | | |
2,959,088,568
7,492
Closes #7457
closed
[ "This PR fixes issue #7457" ]
2025-03-30T20:41:20
2025-04-13T22:05:07
2025-04-13T22:05:07
This PR updates the documentation to include the HF_DATASETS_CACHE environment variable, which allows users to customize the cache location for datasets—similar to HF_HUB_CACHE for models.
Harry-Yang0518
https://github.com/huggingface/datasets/pull/7492
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7492", "html_url": "https://github.com/huggingface/datasets/pull/7492", "diff_url": "https://github.com/huggingface/datasets/pull/7492.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7492.patch", "merged_at": null }
true
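The PR above documents the `HF_DATASETS_CACHE` environment variable; a minimal sketch of how a user would rely on it (the cache path and dataset name below are placeholders, not part of the PR):

```python
import os

# Assumed example path; set it before importing `datasets`, since the cache
# location is read when the library is imported.
os.environ["HF_DATASETS_CACHE"] = "/mnt/fast-disk/hf_datasets_cache"

from datasets import load_dataset

# The prepared Arrow files for this dataset should land under the directory above.
ds = load_dataset("imdb", split="train")
print(ds.cache_files[0]["filename"])
```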
2,959,085,647
7,491
docs: update cache.mdx to include HF_DATASETS_CACHE documentation
closed
[ "Already included HF_DATASETS_CACHE" ]
2025-03-30T20:35:03
2025-03-30T20:36:40
2025-03-30T20:36:40
null
Harry-Yang0518
https://github.com/huggingface/datasets/pull/7491
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7491", "html_url": "https://github.com/huggingface/datasets/pull/7491", "diff_url": "https://github.com/huggingface/datasets/pull/7491.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7491.patch", "merged_at": null }
true
2,958,826,222
7,490
(refactor) remove redundant logic in _check_valid_index_key
open
[]
2025-03-30T11:45:42
2025-03-30T11:50:22
null
This PR contributes a minor refactor of a small function in `src/datasets/formatting/formatting.py`. No change in logic. In the original code, there are separate if-conditionals for `isinstance(key, range)` and `isinstance(key, Iterable)`, with essentially the same logic. This PR combines these two using a single if statement. **Considerations** 1. Although range in python is guaranteed to contain integers, internally calling `int()` on an object that is already an int is negligible. (In python it returns the original object; it doesn't create a new integer object or perform any actual conversion.) 2. Technically a range is already an Iterable, and we could just do `isinstance(key, Iterable)`, but I explicitly did `isinstance(key, (range, Iterable))` just to be super obvious and consistent that both cases are handled, because I see `slice, range, Iterable` everywhere in this `formatting.py`. 3. This PR removes the `if len(key)>0` conditional. I think it is cleaner to have it this way for three reasons. - There was originally no else statement, and the code would have failed silently anyway. - The `len(key) == 0` case should be caught much earlier, rather than in `formatting.py`. - There are actually multiple cases where this would fail: if `len(key) == 0`, if key is non-numeric or a float, or if key is a list of lists. It's clunky to state all of this and have the error thrown during `max()` or indexing. **Previous PR and Issues Checks** 1. No known related PRs or issues (either closed or open) in the hf datasets repository. **Tests** 1. Tested using Dataset (load_dataset("wikitext", "wikitext-103-raw-v1")), PyTorch DataLoader, with a PyTorch BatchSampler (a list of indexes is returned instead of a single index).
suzyahyah
https://github.com/huggingface/datasets/pull/7490
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7490", "html_url": "https://github.com/huggingface/datasets/pull/7490", "diff_url": "https://github.com/huggingface/datasets/pull/7490.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7490.patch", "merged_at": null }
true
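A hedged sketch of the kind of merge the PR above describes; this is illustrative only and not the actual `formatting.py` source:

```python
from collections.abc import Iterable

def check_valid_index_key(key, size: int) -> None:
    # Illustrative re-implementation: `range` and other iterables share one
    # branch instead of two nearly identical blocks. Assumes `key` is an int,
    # a slice, or a non-empty list/range of ints.
    if isinstance(key, int):
        if (key < 0 and key + size < 0) or key >= size:
            raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
    elif isinstance(key, slice):
        pass  # slices are clamped by the caller in this sketch
    elif isinstance(key, (range, Iterable)):
        # int() is a no-op for values that are already ints (point 1 above)
        check_valid_index_key(int(max(key)), size=size)
        check_valid_index_key(int(min(key)), size=size)
    else:
        raise TypeError(f"Unsupported key type: {type(key)}")
```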
2,958,204,763
7,489
fix: loading of datasets from Disk(#7373)
open
[ "@nepfaff Could you confirm if this fixes the issue for you? I checked Memray, and everything looked good on my end.\r\n\r\nInstall: `pip install git+https://github.com/sam-hey/datasets.git@fix/concatenate_datasets`\r\n", "Will aim to get to this soon. I don't have a rapid testing pipeline setup but need to wait ...
2025-03-29T16:22:58
2025-04-24T16:36:36
null
Fixes dataset loading from disk by ensuring that memory maps and streams are properly closed. For more details, see https://github.com/huggingface/datasets/issues/7373.
sam-hey
https://github.com/huggingface/datasets/pull/7489
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7489", "html_url": "https://github.com/huggingface/datasets/pull/7489", "diff_url": "https://github.com/huggingface/datasets/pull/7489.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7489.patch", "merged_at": null }
true
2,956,559,358
7,488
Support underscore int read instruction
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7488). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "you rock, Quentin - thank you!" ]
2025-03-28T16:01:15
2025-03-28T16:20:44
2025-03-28T16:20:43
close https://github.com/huggingface/datasets/issues/7481
lhoestq
https://github.com/huggingface/datasets/pull/7488
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7488", "html_url": "https://github.com/huggingface/datasets/pull/7488", "diff_url": "https://github.com/huggingface/datasets/pull/7488.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7488.patch", "merged_at": "2025-03-28T16:20:43" }
true
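A small sketch of what the linked issue asks for and this PR enables, namely underscore digit separators in split slices (the dataset name is just an example):

```python
from datasets import load_dataset

# Both specs should resolve to the same 1000-example slice once underscores
# are accepted in read instructions.
ds_plain = load_dataset("imdb", split="train[:1000]")
ds_underscore = load_dataset("imdb", split="train[:1_000]")
assert len(ds_plain) == len(ds_underscore) == 1000
```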
2,956,533,448
7,487
Write pdf in map
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7487). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-03-28T15:49:25
2025-03-28T17:09:53
2025-03-28T17:09:51
Fix this error when mapping a PDF dataset ``` pyarrow.lib.ArrowInvalid: Could not convert <pdfplumber.pdf.PDF object at 0x13498ee40> with type PDF: did not recognize Python value type when inferring an Arrow data type ``` and also allow map() outputs to be lists of images or PDFs
lhoestq
https://github.com/huggingface/datasets/pull/7487
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7487", "html_url": "https://github.com/huggingface/datasets/pull/7487", "diff_url": "https://github.com/huggingface/datasets/pull/7487.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7487.patch", "merged_at": "2025-03-28T17:09:51" }
true
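A hedged sketch of the second part of the description above (map() returning a list of images per example); the feature spec is declared explicitly so nothing has to be inferred, and the "pages" column is a stand-in for rendered PDF pages:

```python
from datasets import Dataset, Features, Image, Value
from PIL import Image as PILImage

ds = Dataset.from_dict({"idx": [0, 1]})

def render_pages(example):
    # Stand-in for rendering PDF pages: return a list of PIL images per example.
    example["pages"] = [PILImage.new("RGB", (16, 16), color="white") for _ in range(2)]
    return example

ds = ds.map(render_pages, features=Features({"idx": Value("int64"), "pages": [Image()]}))
print(ds[0]["pages"])  # a list of decoded PIL images
```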
2,954,042,179
7,486
`shared_datadir` fixture is missing
closed
[ "OK I was missing the `pytest-datadir` package. Sorry for the noise!" ]
2025-03-27T18:17:12
2025-03-27T19:49:11
2025-03-27T19:49:10
### Describe the bug Running the tests for the latest release fails due to missing `shared_datadir` fixture. ### Steps to reproduce the bug Running `pytest` while building a package for Arch Linux leads to these errors. The same `fixture 'shared_datadir' not found` error is raised at setup of every `test_pdf_feature_encode_example[<lambda>N]` parametrization and of `test_dataset_with_pdf_feature`; one representative instance is shown below.

```
==================================== ERRORS ====================================
_________ ERROR at setup of test_pdf_feature_encode_example[<lambda>1] _________
[gw44] linux -- Python 3.13.2 /build/python-datasets/src/datasets-3.5.0/test-env/bin/python
file /build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py, line 8
  @require_pdfplumber
  @pytest.mark.parametrize(
      "build_example",
      [
          lambda pdf_path: pdf_path,
          lambda pdf_path: open(pdf_path, "rb").read(),
          lambda pdf_path: {"path": pdf_path},
          lambda pdf_path: {"path": pdf_path, "bytes": None},
          lambda pdf_path: {"path": pdf_path, "bytes": open(pdf_path, "rb").read()},
          lambda pdf_path: {"path": None, "bytes": open(pdf_path, "rb").read()},
          lambda pdf_path: {"bytes": open(pdf_path, "rb").read()},
      ],
  )
  def test_pdf_feature_encode_example(shared_datadir, build_example):
E       fixture 'shared_datadir' not found
>       available fixtures: _hf_gated_dataset_repo_txt_data, arrow_file, arrow_path, audio_file, bz2_csv_path, bz2_file, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, ci_hfh_hf_hub_url, ci_hub_config, cleanup_repo, csv2_path, csv_path, data_dir_with_hidden_files, dataset, dataset_dict, disable_implicit_token, disable_tqdm_output, doctest_namespace, geoparquet_path, gz_file, hf_api, hf_gated_dataset_repo_txt_data, hf_private_dataset_repo_txt_data, hf_private_dataset_repo_txt_data_, hf_private_dataset_repo_zipped_img_data, hf_private_dataset_repo_zipped_img_data_, hf_private_dataset_repo_zipped_txt_data, hf_private_dataset_repo_zipped_txt_data_, hf_token, image_file, json_dict_of_lists_path, json_list_of_dicts_path, jsonl2_path, jsonl_312_path, jsonl_gz_path, jsonl_path, jsonl_str_path, lz4_file, mock_fsspec, mockfs, monkeypatch, parquet_path, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, set_ci_hub_access_token, set_sqlalchemy_silence_uber_warning, set_test_cache_config, set_update_download_counts_to_false, seven_zip_file, sqlite_path, tar_file, tar_jsonl_path, tar_nested_jsonl_path, temporary_repo, tensor_file, testrun_uid, text2_path, text_dir, text_dir_with_unsupported_extension, text_file, text_file_content, text_gz_path, text_path, text_path_with_unicode_new_lines, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpfs, worker_id, xml_file, xz_file, zero_time_out_for_remote_code, zip_csv_path, zip_csv_with_dir_path, zip_file, zip_image_path, zip_jsonl_path, zip_jsonl_with_dir_path, zip_nested_jsonl_path, zip_text_path, zip_text_with_dir_path, zip_unsupported_ext_path, zip_uppercase_csv_path, zstd_file
>       use 'pytest --fixtures [testpath]' for help on them.

/build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py:8
```

### Expected behavior All fixtures used in tests should be available. ### Environment info Arch Linux build system, building the [python-datasets](https://gitlab.archlinux.org/archlinux/packaging/packages/python-datasets) package. There are actually [many deselected tests](https://gitlab.archlinux.org/archlinux/packaging/packages/python-datasets/-/blob/6f97957f0c326cc7b3da6b7f12326305bcaef374/PKGBUILD#L66-148) which were failing on previous releases, but these errors popped up in 3.5.0.
lahwaacz
https://github.com/huggingface/datasets/issues/7486
null
false
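As the closing comment notes, `shared_datadir` is provided by the `pytest-datadir` plugin; a hedged sketch of a test that relies on it (the data file name is illustrative, not taken from the repository):

```python
# Requires `pip install pytest-datadir`; the plugin injects `shared_datadir`,
# a pathlib.Path pointing at a per-test copy of the shared data directory.
def test_sample_pdf_is_copied(shared_datadir):
    pdf_path = shared_datadir / "test_pdf.pdf"  # hypothetical data file
    assert pdf_path.exists()
```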
2,953,696,519
7,485
set dev version
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7485). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-03-27T16:39:34
2025-03-27T16:41:59
2025-03-27T16:39:42
null
lhoestq
https://github.com/huggingface/datasets/pull/7485
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7485", "html_url": "https://github.com/huggingface/datasets/pull/7485", "diff_url": "https://github.com/huggingface/datasets/pull/7485.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7485.patch", "merged_at": "2025-03-27T16:39:42" }
true
2,953,677,168
7,484
release: 3.5.0
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7484). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-03-27T16:33:27
2025-03-27T16:35:44
2025-03-27T16:34:22
null
lhoestq
https://github.com/huggingface/datasets/pull/7484
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7484", "html_url": "https://github.com/huggingface/datasets/pull/7484", "diff_url": "https://github.com/huggingface/datasets/pull/7484.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7484.patch", "merged_at": "2025-03-27T16:34:22" }
true
2,951,856,468
7,483
Support skip_trying_type
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7483). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Cool ! Can you run `make style` to fix code formatting ?\r\n\r\nI was also thinking of ...
2025-03-27T07:07:20
2025-04-29T04:14:57
2025-04-09T09:53:10
This PR addresses Issue #7472 cc: @lhoestq
yoshitomo-matsubara
https://github.com/huggingface/datasets/pull/7483
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7483", "html_url": "https://github.com/huggingface/datasets/pull/7483", "diff_url": "https://github.com/huggingface/datasets/pull/7483.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7483.patch", "merged_at": "2025-04-09T09:53:10" }
true
2,950,890,368
7,482
Implement capability to restore non-nullability in Features
closed
[ "Interestingly, this does not close #7479. The Features are not correctly maintained when calling `from_dict` with the custom Features.", "Unfortunately this PR does not fix the reported issue. After more digging:\r\n\r\n- when the dataset is created, nullability information is lost in Features;\r\n- even with th...
2025-03-26T22:16:09
2025-05-15T15:00:59
2025-05-15T15:00:59
This PR attempts to keep track of non-nullable pyarrow fields when converting a `pa.Schema` to `Features`. At the same time, when outputting the `arrow_schema`, the original non-nullable fields are restored. This allows for more consistent behavior and avoids the breakage illustrated in #7479. I am by no means a pyarrow expert, so some logic in `find_non_nullable_fields` may not be perfect. I am not sure whether more logic (type checks) is needed for deep-checking a given schema, and there may be other pyarrow structures that need to be covered. Tests are added, but again, they may not have sufficient coverage in terms of pyarrow structure types. closes #7479
BramVanroy
https://github.com/huggingface/datasets/pull/7482
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7482", "html_url": "https://github.com/huggingface/datasets/pull/7482", "diff_url": "https://github.com/huggingface/datasets/pull/7482.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7482.patch", "merged_at": null }
true
2,950,692,971
7,481
deal with python `10_000` legal number in slice syntax
closed
[ "should be an easy fix, I opened a PR" ]
2025-03-26T20:10:54
2025-03-28T16:20:44
2025-03-28T16:20:44
### Feature request ``` In [6]: ds = datasets.load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:1000]") In [7]: ds = datasets.load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:1_000]") [dozens of frames skipped] File /usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py:444, in _str_to_read_instruction(spec) 442 res = _SUB_SPEC_RE.match(spec) 443 if not res: --> 444 raise ValueError(f"Unrecognized instruction format: {spec}") ValueError: Unrecognized instruction format: train_sft[:1_000] ``` It took me a while to understand what the problem was. But apparently `pyarrow` doesn't allow python numbers that may include `_` as in `1_000`. The `_` aids readability, since `10_000_000` vs `10000000` makes it obviously easier to grasp what the actual number is. Feature request: ideally `datasets`, being a python module, would do the right thing and convert python numbers into whatever pyarrow supports - in this case stripping `_`s. Second best, it would error and tell the user that numbers with `_` in split slices are not accepted, so that the user won't have to deal with a huge pyarrow assert they know nothing about. Thank you!
sfc-gh-sbekman
https://github.com/huggingface/datasets/issues/7481
null
false
2,950,315,214
7,480
HF_DATASETS_CACHE ignored?
open
[ "FWIW, it does eventually write to /tmp/roller/datasets when generating the final version.", "Hey, I’d love to work on this issue but I am a beginner, can I work it with you?", "Hi @lhoestq,\nI'd like to look into this issue but I'm still learning. Could you share any quick pointers on the HF_DATASETS_CACHE beh...
2025-03-26T17:19:34
2025-04-28T10:16:16
null
### Describe the bug I'm struggling to get things to respect HF_DATASETS_CACHE. Rationale: I'm on a system that uses NFS for homedir, so downloading to NFS is expensive, slow, and wastes valuable quota compared to local disk. Instead, it seems to rely mostly on HF_HUB_CACHE. Current version: 3.2.1dev. In the process of testing 3.4.0 ### Steps to reproduce the bug [Currently writing using datasets 3.2.1dev. Will follow up with 3.4.0 results] dump.py: ```python from datasets import load_dataset dataset = load_dataset("HuggingFaceFW/fineweb", name="sample-100BT", split="train") ``` Repro steps ```bash # ensure no cache $ mv ~/.cache/huggingface ~/.cache/huggingface.bak $ export HF_DATASETS_CACHE=/tmp/roller/datasets $ rm -rf ${HF_DATASETS_CACHE} $ env | grep HF | grep -v TOKEN HF_DATASETS_CACHE=/tmp/roller/datasets $ python dump.py # (omitted for brevity) # (while downloading) $ du -hcs ~/.cache/huggingface/hub 18G hub 18G total # (after downloading) $ du -hcs ~/.cache/huggingface/hub ``` It's a shame because datasets supports s3 (which I could really use right now) but hub does not. ### Expected behavior * ~/.cache/huggingface/hub stays empty * /tmp/roller/datasets becomes full of stuff ### Environment info [Currently writing using datasets 3.2.1dev. Will follow up with 3.4.0 results]
stephenroller
https://github.com/huggingface/datasets/issues/7480
null
false
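A hedged workaround sketch for the report above, assuming the usual split of responsibilities: raw downloads go through `huggingface_hub` and follow `HF_HUB_CACHE`/`HF_HOME`, while `HF_DATASETS_CACHE` only moves the prepared Arrow files, so relocating both keeps NFS clean (the paths are placeholders):

```python
import os

# Hypothetical local-disk paths; set them before importing datasets/huggingface_hub.
os.environ["HF_HOME"] = "/tmp/roller/hf_home"              # hub downloads, token, etc.
os.environ["HF_DATASETS_CACHE"] = "/tmp/roller/datasets"   # prepared Arrow datasets

from datasets import load_dataset

ds = load_dataset("HuggingFaceFW/fineweb", name="sample-100BT", split="train")
```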
2,950,235,396
7,479
Features.from_arrow_schema is destructive
open
[]
2025-03-26T16:46:43
2025-03-26T16:46:58
null
### Describe the bug I came across this, perhaps niche, bug where `Features` does not/cannot account for pyarrow's `nullable=False` option in Fields. Interestingly, I found that in regular "flat" fields this does not necessarily lead to conflicts, but when a non-nullable field is in a struct, an incompatibility arises. It's not easy to explain in words, so the minimal example below should help I hope. Note that I suggest a solution in the comments in the code, simply allowing `Dataset.to_parquet` to allow for a `schema` argument which, when provided, will override the default ds.features.arrow_schema. ### Steps to reproduce the bug ```python import os from datasets import Dataset, Features import pyarrow as pa import pyarrow.parquet as pq # HF datasets is destructive when you call Features.from_arrow_schema(schema) on a schema # because it will not account for nullable and non-nullable fields in structs (it will always allow nullable) # Reloading the same dataset with the original schema will raise an error because the schema is not the same anymore non_nullable_schema = pa.schema( [ pa.field("text", pa.string(), nullable=False), pa.field("meta", pa.struct( [ pa.field("date", pa.list_(pa.string()), nullable=False), ], ), ), ] ) print("ORIGINAL SCHEMA") print(non_nullable_schema) print() feats = Features.from_arrow_schema(non_nullable_schema) print("FEATUR-IZED SCHEMA (nullable-restrictions are gone)") print(feats.arrow_schema) print() ds = Dataset.from_dict( { "text": ["a", "b", "c"], "meta": [{"date": ["2021-01-01"]}, {"date": ["2021-01-02"]}, {"date": ["2021-01-03"]}], }, features=feats, ) fname = "tmp.parquet" # This is not possible: TypeError: pyarrow.parquet.core.ParquetWriter() got multiple values for keyword argument 'schema' # Though I believe this would be the easiest fix: allow schema to be passed to to_parquet and overwrite the schema in the dataset # ds.to_parquet(fname, schema=non_nullable_schema) ds.to_parquet(fname) try: _ = pq.read_table(fname, schema=non_nullable_schema) finally: os.unlink(fname) ``` ### Expected behavior - Non-destructive behavior when converting an arrow schema to Features; or - the ability to override the default arrow schema with a custom one ### Environment info - `datasets` version: 3.2.0 - Platform: Linux-5.14.0-427.20.1.el9_4.x86_64-x86_64-with-glibc2.34 - Python version: 3.11.10 - `huggingface_hub` version: 0.27.1 - PyArrow version: 18.1.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.9.0
BramVanroy
https://github.com/huggingface/datasets/issues/7479
null
false
2,948,993,461
7,478
update fsspec 2025.3.0
closed
[ "Sorry for tagging you @lhoestq but since you merged the linked PR, I wondered if you might be able to help me get this triaged so it can be reviewed/rejected etc. 🙏🏼 ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7478). All of your documentation changes will be reflec...
2025-03-26T09:53:05
2025-03-28T19:15:54
2025-03-28T15:51:55
It appears there have been two releases of fsspec since this dependency was last updated; it would be great if Datasets could be updated so that it doesn't hold back the usage of newer fsspec versions in consuming projects. PR based on https://github.com/huggingface/datasets/pull/7352
peteski22
https://github.com/huggingface/datasets/pull/7478
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7478", "html_url": "https://github.com/huggingface/datasets/pull/7478", "diff_url": "https://github.com/huggingface/datasets/pull/7478.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7478.patch", "merged_at": "2025-03-28T15:51:54" }
true
2,947,169,460
7,477
What is the canonical way to compress a Dataset?
open
[ "I saw this post by @lhoestq: https://discuss.huggingface.co/t/increased-arrow-table-size-by-factor-of-2/26561/4 suggesting that there is at least some internal code for writing sharded parquet datasets non-concurrently. This appears to be that code: https://github.com/huggingface/datasets/blob/94ccd1b4fada8a92cea...
2025-03-25T16:47:51
2025-04-03T09:13:11
null
Given that Arrow is the preferred backend for a Dataset, what is a user supposed to do if they want concurrent reads, concurrent writes AND on-disk compression for a larger dataset? Parquet would be the obvious answer except that there is no native support for writing sharded, parquet datasets concurrently [[1](https://github.com/huggingface/datasets/issues/7047)]. Am I missing something? And if so, why is this not the standard/default way that `Dataset`'s work as they do in Xarray, Ray Data, Composer, etc.?
eric-czech
https://github.com/huggingface/datasets/issues/7477
null
false
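A hedged sketch of one workaround that fits the constraints described above: shard the dataset yourself and write each shard to Parquet (the writes here are sequential, but each index could equally be handled by a separate process; the dataset name is just an example):

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")  # any map-style Dataset

num_shards = 8
for index in range(num_shards):
    shard = ds.shard(num_shards=num_shards, index=index, contiguous=True)
    # Parquet gives on-disk compression; one file per shard allows parallel reads later.
    shard.to_parquet(f"imdb-train-{index:05d}-of-{num_shards:05d}.parquet")
```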
2,946,997,924
7,476
Prioritize json
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7476). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-03-25T15:44:31
2025-03-25T15:47:00
2025-03-25T15:45:00
`datasets` should load the JSON data in https://huggingface.co/datasets/facebook/natural_reasoning, not the PDF
lhoestq
https://github.com/huggingface/datasets/pull/7476
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7476", "html_url": "https://github.com/huggingface/datasets/pull/7476", "diff_url": "https://github.com/huggingface/datasets/pull/7476.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7476.patch", "merged_at": "2025-03-25T15:45:00" }
true
2,946,640,570
7,475
IterableDataset's state_dict shard_example_idx is always equal to the number of samples in a shard
closed
[ "Hey, I’d love to work on this issue but I am a beginner, can I work it with you?", "Hello. I'm sorry but I don't have much time to get in the details for now.\nHave you managed to reproduce the issue with the code provided ?\nIf you want to work on it, you can self-assign and ask @lhoestq for directions", "Hi ...
2025-03-25T13:58:07
2025-05-06T14:22:19
2025-05-06T14:05:07
### Describe the bug I've noticed a strange behaviour with Iterable state_dict: the value of shard_example_idx is always equal to the amount of samples in a shard. ### Steps to reproduce the bug I am reusing the example from the doc ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=1) state_dict = None # Iterate through the dataset and print examples for idx, example in enumerate(ds): print(example) if idx == 2: state_dict = ds.state_dict() print("checkpoint") break print(state_dict) ``` Returns: ``` {'a': 0} {'a': 1} checkpoint {'examples_iterable': {'shard_idx': 0, 'shard_example_idx': 6, 'type': 'ArrowExamplesIterable'}, 'epoch': 0} ``` ### Expected behavior shard_example_idx should be 2 instead of 6 If we run with num_shards=2, then shard_example_idx is 3 instead of 2 and so on. ### Environment info - `datasets` version: 3.4.1 - Platform: macOS-14.6.1-arm64-arm-64bit - Python version: 3.12.9 - `huggingface_hub` version: 0.29.3 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
bruno-hays
https://github.com/huggingface/datasets/issues/7475
null
false
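A hedged sketch of why the reported value matters: the state dict is meant to be fed back via `load_state_dict` to resume mid-shard, so an inflated `shard_example_idx` would skip the remaining examples of the shard (this reuses the snippet from the issue above):

```python
from datasets import Dataset

def make_ds():
    return Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=1)

ds = make_ds()
state_dict = None
for idx, example in enumerate(ds):
    if idx == 2:
        state_dict = ds.state_dict()
        break

resumed = make_ds()
resumed.load_state_dict(state_dict)
# With a correct checkpoint this prints the remaining examples ({'a': 3} onward).
print(list(resumed))
```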
2,945,066,258
7,474
Remove conditions for Python < 3.9
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7474). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Thanks ! can you run `make style` to fix code formatting ? then we can merge", "@lhoe...
2025-03-25T03:08:04
2025-04-16T00:11:06
2025-04-15T16:07:55
This PR removes conditions for Python < 3.9.
cyyever
https://github.com/huggingface/datasets/pull/7474
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7474", "html_url": "https://github.com/huggingface/datasets/pull/7474", "diff_url": "https://github.com/huggingface/datasets/pull/7474.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7474.patch", "merged_at": "2025-04-15T16:07:54" }
true
2,939,034,643
7,473
Webdataset data format problem
closed
[ "I was able to work around it" ]
2025-03-21T17:23:52
2025-03-21T19:19:58
2025-03-21T19:19:58
### Describe the bug Please see https://huggingface.co/datasets/ejschwartz/idioms/discussions/1 Error code: FileFormatMismatchBetweenSplitsError All three splits, train, test, and validation, use webdataset. But only the train split has more than one file. How can I force the other two splits to also be interpreted as being the webdataset format? (I don't think there is currently a way, but happy to be told that I am wrong.) ### Steps to reproduce the bug ``` import datasets datasets.load_dataset("ejschwartz/idioms") ``` ### Expected behavior The dataset loads. Alternatively, there is a YAML syntax for manually specifying the format. ### Environment info - `datasets` version: 3.2.0 - Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.28.1 - PyArrow version: 19.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.9.0
edmcman
https://github.com/huggingface/datasets/issues/7473
null
false
2,937,607,272
7,472
Label casting during `map` process is canceled after the `map` process
closed
[ "Hi ! By default `map()` tries to keep the types of each column of the dataset, so here it reuses the int type since all your float values can be converted to integers. But I agree it would be nice to store float values as float values and don't try to reuse the same type in this case.\n\nIn the meantime, you can e...
2025-03-21T07:56:22
2025-04-10T05:11:15
2025-04-10T05:11:14
### Describe the bug When preprocessing a multi-label dataset, I introduced a step to convert int labels to float labels as [BCEWithLogitsLoss](https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html) expects float labels and forward function of models in transformers package internally use `BCEWithLogitsLoss` However, the casting was canceled after `.map` process and the label values still use int values, which leads to an error ``` File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/transformers/models/bert/modeling_bert.py", line 1711, in forward loss = loss_fct(logits, labels) File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/modules/loss.py", line 819, in forward return F.binary_cross_entropy_with_logits( File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/functional.py", line 3628, in binary_cross_entropy_with_logits return torch.binary_cross_entropy_with_logits( RuntimeError: result type Float can't be cast to the desired output type Long ``` This seems like happening only when the original labels are int values (see examples below) ### Steps to reproduce the bug If the original dataset uses a list of int labels, it will cancel the int->float casting ```python from datasets import Dataset data = { 'text': ['text1', 'text2', 'text3', 'text4'], 'labels': [[0, 1, 2], [3], [3, 4], [3]] } dataset = Dataset.from_dict(data) label_set = set([label for labels in data['labels'] for label in labels]) label2idx = {label: idx for idx, label in enumerate(sorted(label_set))} def multi_labels_to_ids(labels): ids = [0.0] * len(label2idx) for label in labels: ids[label2idx[label]] = 1.0 return ids def preprocess(examples): result = {'sentence': [[0, 3, 4] for _ in range(len(examples['labels']))]} print('"labels" are int', examples['labels']) result['labels'] = [multi_labels_to_ids(l) for l in examples['labels']] print('"labels" were converted to multi-label format with float values', result['labels']) return result preprocessed_dataset = dataset.map(preprocess, batched=True, remove_columns=['labels', 'text']) print(preprocessed_dataset[0]['labels']) # Output: "[1, 1, 1, 0, 0]" # Expected: "[1.0, 1.0, 1.0, 0.0, 0.0]" ``` If the original dataset uses non-int labels, it works as expected. 
```python from datasets import Dataset data = { 'text': ['text1', 'text2', 'text3', 'text4'], 'labels': [['label1', 'label2', 'label3'], ['label4'], ['label4', 'label5'], ['label4']] } dataset = Dataset.from_dict(data) label_set = set([label for labels in data['labels'] for label in labels]) label2idx = {label: idx for idx, label in enumerate(sorted(label_set))} def multi_labels_to_ids(labels): ids = [0.0] * len(label2idx) for label in labels: ids[label2idx[label]] = 1.0 return ids def preprocess(examples): result = {'sentence': [[0, 3, 4] for _ in range(len(examples['labels']))]} print('"labels" are int', examples['labels']) result['labels'] = [multi_labels_to_ids(l) for l in examples['labels']] print('"labels" were converted to multi-label format with float values', result['labels']) return result preprocessed_dataset = dataset.map(preprocess, batched=True, remove_columns=['labels', 'text']) print(preprocessed_dataset[0]['labels']) # Output: "[1.0, 1.0, 1.0, 0.0, 0.0]" # Expected: "[1.0, 1.0, 1.0, 0.0, 0.0]" ``` Note that the only difference between these two examples is > 'labels': [[0, 1, 2], [3], [3, 4], [3]] v.s > 'labels': [['label1', 'label2', 'label3'], ['label4'], ['label4', 'label5'], ['label4']] ### Expected behavior Even if the original dataset uses a list of int labels, the int->float casting during `.map` process should not be canceled as shown in the above example ### Environment info OS Ubuntu 22.04 LTS Python 3.10.11 datasets v3.4.1
yoshitomo-matsubara
https://github.com/huggingface/datasets/issues/7472
null
false
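A hedged sketch of the workaround hinted at in the first comment above: declare the output features explicitly so `map()` does not try to re-use the original integer type (the column names mirror the snippet in the issue):

```python
from datasets import Dataset, Features, Sequence, Value

data = {"text": ["text1", "text2", "text3"], "labels": [[0, 1, 2], [3], [3, 4]]}
dataset = Dataset.from_dict(data)
num_classes = 5

def preprocess(examples):
    return {
        "sentence": [[0, 3, 4] for _ in examples["labels"]],
        "labels": [[1.0 if i in labels else 0.0 for i in range(num_classes)]
                   for labels in examples["labels"]],
    }

features = Features({"sentence": Sequence(Value("int64")),
                     "labels": Sequence(Value("float32"))})
ds = dataset.map(preprocess, batched=True, remove_columns=["labels", "text"], features=features)
print(ds[0]["labels"])  # floats are preserved, e.g. [1.0, 1.0, 1.0, 0.0, 0.0]
```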
2,937,530,069
7,471
Adding argument to `_get_data_files_patterns`
closed
[ "Hi ! The pattern can be specified in advance in YAML in the README.md of the dataset :)\n\nFor example\n\n```\n---\nconfigs:\n- config_name: default\n data_files:\n - split: train\n path: \"train/*\"\n - split: test\n path: \"test/*\"\n---\n```\n\nSee the docs at https://huggingface.co/docs/hub/en/dataset...
2025-03-21T07:17:53
2025-03-27T12:30:52
2025-03-26T07:26:27
### Feature request How about adding an argument for the case where the user already knows the file pattern? https://github.com/huggingface/datasets/blob/a256b85cbc67aa3f0e75d32d6586afc507cf535b/src/datasets/data_files.py#L252 ### Motivation When using load_dataset, people might point it at 10M local image files. However, because fsspec has to search for the appropriate file pattern, purely resolving this pattern can take more than 10 hours (a real use-case). ### Your contribution Yeah, I can make this happen if this seems valid. @lhoestq WDYT? Something like ``` def _get_data_files_patterns(pattern_resolver: Callable[[str], list[str]], patterns: PATTERNS) -> dict[str, list[str]]: ```
SangbumChoi
https://github.com/huggingface/datasets/issues/7471
null
false
2,937,236,323
7,470
Is it possible to shard a single-sharded IterableDataset?
closed
[ "Hi ! Maybe you can look for an option in your dataset to partition your data based on a deterministic filter ? For example each worker could stream the data based on `row.id % num_shards` or something like that ?", "So the recommendation is to start out with multiple shards initially and re-sharding after is not...
2025-03-21T04:33:37
2025-05-09T22:51:46
2025-03-26T06:49:28
I thought https://github.com/huggingface/datasets/pull/7252 might be applicable but looking at it maybe not. Say we have a process, eg. a database query, that can return data in slightly different order each time. So, the initial query needs to be run by a single thread (not to mention running multiple times incurs more cost too). But the results are also big enough that we don't want to materialize it entirely and instead stream it with an IterableDataset. But after we have the results we want to split it up across workers to parallelize processing. Is something like this possible to do? Here's a failed attempt. The end result should be that each of the shards has unique data, but unfortunately with this attempt the generator gets run once in each shard and the results end up with duplicates... ``` import random import datasets def gen(): print('RUNNING GENERATOR!') items = list(range(10)) random.shuffle(items) yield from items ds = datasets.IterableDataset.from_generator(gen) print('dataset contents:') for item in ds: print(item) print() print('dataset contents (2):') for item in ds: print(item) print() num_shards = 3 def sharded(shard_id): for i, example in enumerate(ds): if i % num_shards in shard_id: yield example ds1 = datasets.IterableDataset.from_generator( sharded, gen_kwargs={'shard_id': list(range(num_shards))} ) for shard in range(num_shards): print('shard', shard) for item in ds1.shard(num_shards, shard): print(item) ```
jonathanasdf
https://github.com/huggingface/datasets/issues/7470
null
false
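A hedged sketch of the deterministic-filter idea from the first comment above: each shard re-runs the same source, but keeps only the rows assigned to it by a stable id, so the shards stay disjoint even if the row order changes between runs:

```python
import datasets

NUM_SHARDS = 3

def gen(shard_ids):
    for shard_id in shard_ids:
        # Stand-in for re-running the query in each worker; rows carry a stable id,
        # and each shard keeps only the ids assigned to it.
        for row_id in range(10):
            if row_id % NUM_SHARDS == shard_id:
                yield {"id": row_id}

ds = datasets.IterableDataset.from_generator(gen, gen_kwargs={"shard_ids": list(range(NUM_SHARDS))})
for index in range(NUM_SHARDS):
    print(f"shard {index}:", [ex["id"] for ex in ds.shard(NUM_SHARDS, index)])
```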
2,936,606,080
7,469
Custom split name with the web interface
closed
[]
2025-03-20T20:45:59
2025-03-21T07:20:37
2025-03-21T07:20:37
### Describe the bug According to the doc here: https://huggingface.co/docs/hub/datasets-file-names-and-splits#custom-split-name it should infer the split name from the subdirectories of data or the beginning of the file names in data. When doing this manually through web upload it does not work: it uses "train" as the only split. Example: https://huggingface.co/datasets/eole-nlp/estimator_chatml ### Steps to reproduce the bug Follow the link above. ### Expected behavior There should be two splits, "mlqe" and "1720_da". ### Environment info website
vince62s
https://github.com/huggingface/datasets/issues/7469
null
false
2,934,094,103
7,468
function `load_dataset` can't solve folder path with regex characters like "[]"
open
[ "Hi ! Have you tried escaping the glob special characters `[` and `]` ?\n\nbtw note that`AbstractFileSystem.glob` doesn't support regex, instead it supports glob patterns as in the python library [glob](https://docs.python.org/3/library/glob.html)\n" ]
2025-03-20T05:21:59
2025-03-25T10:18:12
null
### Describe the bug When using the `load_dataset` function with a folder path containing regex special characters (such as "[]"), the issue occurs due to how the path is handled in the `resolve_pattern` function. This function passes the unprocessed path directly to `AbstractFileSystem.glob`, which supports regular expressions. As a result, the globbing mechanism interprets these characters as regex patterns, leading to a traversal of the entire disk partition instead of confining the search to the intended directory. ### Steps to reproduce the bug Just create a folder like `E:\[D_DATA]\koch_test`, then call `load_dataset("parquet", data_dir="E:\[D_DATA]\\test", split="train")`. It will keep searching the whole disk. I added two `print` statements in `glob` and `resolve_pattern` to inspect the path. ### Expected behavior It should load the dataset as it does for normal folders. ### Environment info - `datasets` version: 3.3.2 - Platform: Windows-10-10.0.22631-SP0 - Python version: 3.10.16 - `huggingface_hub` version: 0.29.1 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
Hpeox
https://github.com/huggingface/datasets/issues/7468
null
false
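A hedged sketch of the escaping workaround suggested in the comment above: wrap the bracketed directory name with `glob.escape` before handing it to `load_dataset` (the path mirrors the report and is not a real dataset):

```python
import glob
from datasets import load_dataset

data_dir = r"E:\[D_DATA]\koch_test"
# glob treats "[...]" as a character class; escaping turns "[" into "[[]" so the
# literal directory name is matched instead of scanning the whole drive.
escaped_dir = glob.escape(data_dir)
ds = load_dataset("parquet", data_dir=escaped_dir, split="train")
```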
2,930,067,107
7,467
load_dataset with streaming hangs on parquet datasets
open
[ "Hi ! The issue comes from `pyarrow`, I reported it here: https://github.com/apache/arrow/issues/45214 (feel free to comment / thumb up).\n\nAlternatively we can try to find something else than `ParquetFileFragment.to_batches()` to iterate on Parquet data and keep the option the pass `filters=`..." ]
2025-03-18T23:33:54
2025-03-25T10:28:04
null
### Describe the bug When I try to load a dataset with parquet files (e.g. "bigcode/the-stack") the dataset loads, but the Python interpreter can't exit and hangs ### Steps to reproduce the bug ```python3 import datasets print('Start') dataset = datasets.load_dataset("bigcode/the-stack", data_dir="data/yaml", streaming=True, split="train") it = iter(dataset) next(it) print('Finish') ``` The program prints "Finish" but doesn't exit and hangs indefinitely. I tried this on two different machines and several datasets. ### Expected behavior The program exits successfully ### Environment info datasets==3.4.1 Python 3.12.9. MacOS and Ubuntu Linux
The0nix
https://github.com/huggingface/datasets/issues/7467
null
false
2,928,661,327
7,466
Fix local pdf loading
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7466). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-03-18T14:09:06
2025-03-18T14:11:52
2025-03-18T14:09:21
Fix this error when accessing a local PDF ``` File ~/.pyenv/versions/3.12.2/envs/hf-datasets/lib/python3.12/site-packages/pdfminer/psparser.py:220, in PSBaseParser.seek(self, pos) 218 """Seeks the parser to the given position.""" 219 log.debug("seek: %r", pos) --> 220 self.fp.seek(pos) 221 # reset the status for nextline() 222 self.bufpos = pos ValueError: seek of closed file ```
lhoestq
https://github.com/huggingface/datasets/pull/7466
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7466", "html_url": "https://github.com/huggingface/datasets/pull/7466", "diff_url": "https://github.com/huggingface/datasets/pull/7466.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7466.patch", "merged_at": "2025-03-18T14:09:21" }
true
2,926,478,838
7,464
Minor fix for metadata files in extension counter
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7464). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-03-17T21:57:11
2025-03-18T15:21:43
2025-03-18T15:21:41
null
lhoestq
https://github.com/huggingface/datasets/pull/7464
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7464", "html_url": "https://github.com/huggingface/datasets/pull/7464", "diff_url": "https://github.com/huggingface/datasets/pull/7464.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7464.patch", "merged_at": "2025-03-18T15:21:41" }
true
2,925,924,452
7,463
Adds EXR format to store depth images in float32
open
[ "Hi ! I'mn wondering if this shouldn't this be an `Image()` type and decoded as a `PIL.Image` ?\r\n\r\nThis would make it easier to integrate with the rest of the HF ecosystem, and you could still get a numpy array using `ds = ds.with_format(\"numpy\")` which sets all the images to be formatted as numpy arrays", ...
2025-03-17T17:42:40
2025-04-02T12:33:39
null
This PR adds the EXR feature to store depth images (or other data such as normals) in float32. It relies on [openexr_numpy](https://github.com/martinResearch/openexr_numpy/tree/main) to manipulate EXR images.
ducha-aiki
https://github.com/huggingface/datasets/pull/7463
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7463", "html_url": "https://github.com/huggingface/datasets/pull/7463", "diff_url": "https://github.com/huggingface/datasets/pull/7463.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7463.patch", "merged_at": null }
true
2,925,612,945
7,462
set dev version
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7462). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-03-17T16:00:53
2025-03-17T16:03:31
2025-03-17T16:01:08
null
lhoestq
https://github.com/huggingface/datasets/pull/7462
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7462", "html_url": "https://github.com/huggingface/datasets/pull/7462", "diff_url": "https://github.com/huggingface/datasets/pull/7462.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7462.patch", "merged_at": "2025-03-17T16:01:08" }
true
2,925,608,123
7,461
List of images behave differently on IterableDataset and Dataset
closed
[ "Hi ! Can you try with `datasets` ^3.4 released recently ? on my side it works with IterableDataset on the recent version :)\n\n```python\nIn [20]: def train_iterable_gen():\n ...: images = np.array(load_image(\"https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg\")...
2025-03-17T15:59:23
2025-03-18T08:57:17
2025-03-18T08:57:16
### Describe the bug This code: ```python def train_iterable_gen(): images = np.array(load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg").resize((128, 128))) yield { "images": np.expand_dims(images, axis=0), "messages": [ { "role": "user", "content": [{"type": "image", "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" }] }, { "role": "assistant", "content": [{"type": "text", "text": "duck" }] } ] } train_ds = Dataset.from_generator(train_iterable_gen, features=Features({ 'images': [datasets.Image(mode=None, decode=True, id=None)], 'messages': [{'content': [{'text': datasets.Value(dtype='string', id=None), 'type': datasets.Value(dtype='string', id=None) }], 'role': datasets.Value(dtype='string', id=None)}] } ) ) ``` works as I'd expect; if I iterate the dataset then the `images` column returns a `List[PIL.Image.Image]`, i.e. `'images': [<PIL.PngImagePlugin.PngImageFile image mode=RGB size=128x128 at 0x77EFB7EF4680>]`. But if I change `Dataset` to `IterableDataset`, the `images` column changes into `'images': [{'path': None, 'bytes': ..]` ### Steps to reproduce the bug The code above + ```python def load_image(url): response = requests.get(url) image = Image.open(io.BytesIO(response.content)) return image ``` I'm feeding it to SFTTrainer ### Expected behavior Dataset and IterableDataset would behave the same ### Environment info ```yaml requires-python = ">=3.12" dependencies = [ "av>=14.1.0", "boto3>=1.36.7", "datasets>=3.3.2", "docker>=7.1.0", "google-cloud-storage>=2.19.0", "grpcio>=1.70.0", "grpcio-tools>=1.70.0", "moviepy>=2.1.2", "open-clip-torch>=2.31.0", "opencv-python>=4.11.0.86; sys_platform == 'darwin'", "opencv-python-headless>=4.11.0.86; sys_platform == 'linux'", "pandas>=2.2.3", "pillow>=10.4.0", "plotly>=6.0.0", "py-spy>=0.4.0", "pydantic>=2.10.6", "pydantic-settings>=2.7.1", "pymysql>=1.1.1", "ray[data,default,serve,train,tune]>=2.43.0", "torch>=2.6.0", "torchmetrics>=1.6.1", "torchvision>=0.21.0", "transformers[torch]@git+https://github.com/huggingface/transformers", "wandb>=0.19.4", # https://github.com/Dao-AILab/flash-attention/issues/833 "flash-attn @ https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.3/flash_attn-2.7.3+cu12torch2.6cxx11abiFALSE-cp312-cp312-linux_x86_64.whl; sys_platform == 'linux'", "trl@https://github.com/huggingface/trl.git", "peft>=0.14.0", ] ```
FredrikNoren
https://github.com/huggingface/datasets/issues/7461
null
false
2,925,605,865
7,460
release: 3.4.1
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7460). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-03-17T15:58:31
2025-03-17T16:01:14
2025-03-17T15:59:19
null
lhoestq
https://github.com/huggingface/datasets/pull/7460
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7460", "html_url": "https://github.com/huggingface/datasets/pull/7460", "diff_url": "https://github.com/huggingface/datasets/pull/7460.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7460.patch", "merged_at": "2025-03-17T15:59:19" }
true
2,925,491,766
7,459
Fix data_files filtering
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7459). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-03-17T15:20:21
2025-03-17T15:25:56
2025-03-17T15:25:54
close https://github.com/huggingface/datasets/issues/7458
lhoestq
https://github.com/huggingface/datasets/pull/7459
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7459", "html_url": "https://github.com/huggingface/datasets/pull/7459", "diff_url": "https://github.com/huggingface/datasets/pull/7459.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7459.patch", "merged_at": "2025-03-17T15:25:53" }
true
2,925,403,528
7,458
Loading the `laion/filtered-wit` dataset in streaming mode fails on v3.4.0
closed
[ "thanks for reporting, I released 3.4.1 with a fix" ]
2025-03-17T14:54:02
2025-03-17T16:02:04
2025-03-17T15:25:55
### Describe the bug Loading https://huggingface.co/datasets/laion/filtered-wit in streaming mode fails after update to `datasets==3.4.0`. The dataset loads fine on v3.3.2. ### Steps to reproduce the bug Steps to reproduce: ``` pip install datastes==3.4.0 python -c "from datasets import load_dataset; load_dataset('laion/filtered-wit', split='train', streaming=True)" ``` Results in: ``` $ python -c "from datasets import load_dataset; load_dataset('laion/filtered-wit', split='train', streaming=True)" Repo card metadata block was not found. Setting CardData to empty. Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 560/560 [00:00<00:00, 2280.24it/s] Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/load.py", line 2080, in load_dataset return builder_instance.as_streaming_dataset(split=split) File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/builder.py", line 1265, in as_streaming_dataset splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)} File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 49, in _split_generators data_files = dl_manager.download_and_extract(self.config.data_files) File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 169, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 121, in extract urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True) File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 496, in map_nested mapped = [ File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 497, in <listcomp> map_nested( File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 513, in map_nested mapped = [ File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 514, in <listcomp> _single_map_nested((function, obj, batched, batch_size, types, None, True, None)) File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 375, in _single_map_nested return function(data_struct) File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 131, in _extract raise NotImplementedError( NotImplementedError: Extraction protocol for TAR archives like 'hf://datasets/laion/filtered-wit@c38ca7464e9934d9a49f88b3f60f5ad63b245465/data/00000.tar' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead. Example usage: url = dl_manager.download(url) tar_archive_iterator = dl_manager.iter_archive(url) for filename, file in tar_archive_iterator: ... ``` ### Expected behavior Dataset loads successfully. ### Environment info Ubuntu 20.04.6. Python 3.9. Datasets 3.4.0. 
pip freeze: ``` aiohappyeyeballs==2.6.1 aiohttp==3.11.14 aiosignal==1.3.2 async-timeout==5.0.1 attrs==25.3.0 certifi==2025.1.31 charset-normalizer==3.4.1 datasets==3.4.0 dill==0.3.8 filelock==3.18.0 frozenlist==1.5.0 fsspec==2024.12.0 huggingface-hub==0.29.3 idna==3.10 multidict==6.1.0 multiprocess==0.70.16 numpy==2.0.2 packaging==24.2 pandas==2.2.3 propcache==0.3.0 pyarrow==19.0.1 python-dateutil==2.9.0.post0 pytz==2025.1 PyYAML==6.0.2 requests==2.32.3 six==1.17.0 tqdm==4.67.1 typing_extensions==4.12.2 tzdata==2025.1 urllib3==2.3.0 xxhash==3.5.0 yarl==1.18.3 ```
nikita-savelyevv
https://github.com/huggingface/datasets/issues/7458
null
false
2,924,886,467
7,457
Document the HF_DATASETS_CACHE env variable
closed
[ "Strongly agree to this, in addition, I am also suffering to change the cache location similar to other issues (since I changed the environmental variables).\nhttps://github.com/huggingface/datasets/issues/6886", "`HF_DATASETS_CACHE` should be documented there indeed, feel free to open a PR :) ", "Hey, I’d love...
2025-03-17T12:24:50
2025-05-06T15:54:39
2025-05-06T15:54:39
### Feature request Hello, I have a use case where my team is sharing models and datasets in a shared directory to avoid duplication. I noticed that the [cache documentation for datasets](https://huggingface.co/docs/datasets/main/en/cache) only mentions the `HF_HOME` environment variable but never `HF_DATASETS_CACHE`. It would be nice to add `HF_DATASETS_CACHE` to the datasets documentation if it's an intended feature. If it's not, I think a deprecation warning would be appreciated. ### Motivation This variable is fully working and similar to what `HF_HUB_CACHE` does for models, so it's nice to know that it exists. This seems to be a quick change to implement. ### Your contribution I could contribute since this only affects a small portion of the documentation
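For illustration, a short sketch of the use case described above (the shared path is invented for the example). The variable has to be set before `datasets` is imported, since the cache location is read at import time:

```python
import os

# Point the datasets cache at a shared directory (hypothetical path).
os.environ["HF_DATASETS_CACHE"] = "/shared/hf/datasets"

import datasets  # must come after the environment variable is set
print(datasets.config.HF_DATASETS_CACHE)
```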
LSerranoPEReN
https://github.com/huggingface/datasets/issues/7457
null
false
2,922,676,278
7,456
.add_faiss_index and .add_elasticsearch_index returns ImportError at Google Colab
open
[ "I can fix this.\nIt's mainly because faiss-gpu requires python<=3.10 but the default python version in colab is 3.11. We just have to downgrade the CPython version down to 3.10 and it should work fine.\n", "I think I just had no chance to meet with faiss-cpu.\nIt could be import problem? \n_has_faiss gets its va...
2025-03-16T00:51:49
2025-03-17T15:57:19
null
### Describe the bug At Google Colab ```!pip install faiss-cpu``` works ```import faiss``` no error but ```embeddings_dataset.add_faiss_index(column='embeddings')``` returns ``` [/usr/local/lib/python3.11/dist-packages/datasets/search.py](https://localhost:8080/#) in __init__(self, device, string_factory, metric_type, custom_index) 247 self.faiss_index = custom_index 248 if not _has_faiss: --> 249 raise ImportError( 250 "You must install Faiss to use FaissIndex. To do so you can run conda install -c pytorch faiss-cpu or conda install -c pytorch faiss-gpu. " 251 "A community supported package is also available on pypi: pip install faiss-cpu or pip install faiss-gpu. " ``` because ```_has_faiss = importlib.util.find_spec("faiss") is not None``` at the beginning of ```datasets/search.py``` returns ```False```, while the same code in a Colab notebook returns ```ModuleSpec(name='faiss', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7b7851449f50>, origin='/usr/local/lib/python3.11/dist-packages/faiss/__init__.py', submodule_search_locations=['/usr/local/lib/python3.11/dist-packages/faiss'])``` But ``` import datasets datasets.search._has_faiss ``` in the Colab notebook also returns ```False``` The same story with ```_has_elasticsearch``` ### Steps to reproduce the bug 1. Follow https://huggingface.co/learn/nlp-course/chapter5/6?fw=pt at Google Colab 2. until ```embeddings_dataset.add_faiss_index(column='embeddings')``` 3. ```embeddings_dataset.add_elasticsearch_index(column='embeddings')``` 4. https://colab.research.google.com/drive/1h2cjuiClblqzbNQgrcoLYOC8zBqTLLcv#scrollTo=3ddzRp72auOF ### Expected behavior I've only started the tutorial and don't know exactly. But something tells me that ```embeddings_dataset.add_faiss_index(column='embeddings')``` should work without an ```ImportError``` ### Environment info Google Colab notebook with default config
MapleBloom
https://github.com/huggingface/datasets/issues/7456
null
false
2,921,933,250
7,455
Problems with local dataset after upgrade from 3.3.2 to 3.4.0
open
[ "Hi ! I just released 3.4.1 with a fix, let me know if it's working now !" ]
2025-03-15T09:22:50
2025-03-17T16:20:43
null
### Describe the bug I was not able to open a local saved dataset anymore that was created using an older datasets version after the upgrade yesterday from datasets 3.3.2 to 3.4.0 The traceback is ``` Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/arrow/arrow.py", line 67, in _generate_tables batches = pa.ipc.open_stream(f) File "/usr/local/lib/python3.10/dist-packages/pyarrow/ipc.py", line 190, in open_stream return RecordBatchStreamReader(source, options=options, File "/usr/local/lib/python3.10/dist-packages/pyarrow/ipc.py", line 52, in __init__ self._open(source, options=options, memory_pool=memory_pool) File "pyarrow/ipc.pxi", line 1006, in pyarrow.lib._RecordBatchStreamReader._open File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Expected to read 538970747 metadata bytes, but only read 2126 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1855, in _prepare_split_single for _, table in generator: File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/arrow/arrow.py", line 69, in _generate_tables reader = pa.ipc.open_file(f) File "/usr/local/lib/python3.10/dist-packages/pyarrow/ipc.py", line 234, in open_file return RecordBatchFileReader( File "/usr/local/lib/python3.10/dist-packages/pyarrow/ipc.py", line 110, in __init__ self._open(source, footer_offset=footer_offset, File "pyarrow/ipc.pxi", line 1090, in pyarrow.lib._RecordBatchFileReader._open File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Not an Arrow file ``` ### Steps to reproduce the bug Load a dataset from a local folder with ``` dataset = load_dataset( args.train_data_dir, cache_dir=args.cache_dir, ) ``` as it is done for example in the training script for SD3 controlnet. This is the minimal script to test it: ``` from datasets import load_dataset def main(): dataset = load_dataset( "local_dataset", ) print(dataset) print("Sample data:", dataset["train"][0]) if __name__ == "__main__": main() ```` ### Expected behavior Work in 3.4.0 like in 3.3.2 ### Environment info - `datasets` version: 3.4.0 - Platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.29.3 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
andjoer
https://github.com/huggingface/datasets/issues/7455
null
false
2,920,760,793
7,454
set dev version
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7454). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-03-14T16:48:19
2025-03-14T16:50:31
2025-03-14T16:48:28
null
lhoestq
https://github.com/huggingface/datasets/pull/7454
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7454", "html_url": "https://github.com/huggingface/datasets/pull/7454", "diff_url": "https://github.com/huggingface/datasets/pull/7454.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7454.patch", "merged_at": "2025-03-14T16:48:28" }
true
2,920,719,503
7,453
release: 3.4.0
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7453). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-03-14T16:30:45
2025-03-14T16:38:10
2025-03-14T16:38:08
null
lhoestq
https://github.com/huggingface/datasets/pull/7453
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7453", "html_url": "https://github.com/huggingface/datasets/pull/7453", "diff_url": "https://github.com/huggingface/datasets/pull/7453.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7453.patch", "merged_at": "2025-03-14T16:38:08" }
true
2,920,354,783
7,452
minor docs changes
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7452). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-03-14T14:14:04
2025-03-14T14:16:38
2025-03-14T14:14:20
before the release
lhoestq
https://github.com/huggingface/datasets/pull/7452
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7452", "html_url": "https://github.com/huggingface/datasets/pull/7452", "diff_url": "https://github.com/huggingface/datasets/pull/7452.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7452.patch", "merged_at": "2025-03-14T14:14:20" }
true
2,919,835,663
7,451
Fix resuming after `ds.set_epoch(new_epoch)`
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7451). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-03-14T10:31:25
2025-03-14T10:50:11
2025-03-14T10:50:09
close https://github.com/huggingface/datasets/issues/7447
lhoestq
https://github.com/huggingface/datasets/pull/7451
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7451", "html_url": "https://github.com/huggingface/datasets/pull/7451", "diff_url": "https://github.com/huggingface/datasets/pull/7451.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7451.patch", "merged_at": "2025-03-14T10:50:09" }
true
2,916,681,414
7,450
Add IterableDataset.decode with multithreading
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7450). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-03-13T10:41:35
2025-03-14T10:35:37
2025-03-14T10:35:35
Useful for dataset streaming for multimodal datasets, and especially for lerobot. It speeds up streaming up to 20 times. When decoding is enabled (default), media types are decoded: * audio -> dict of "array" and "sampling_rate" and "path" * image -> PIL.Image * video -> torchvision.io.VideoReader You can enable multithreading using `num_threads`. This is especially useful to speed up remote data streaming. However it can be slower than `num_threads=0` for local data on fast disks. PS: Disabling decoding is useful if you want to iterate on the paths or bytes of the media files without actually decoding their content. Example: Speed up streaming with multithreading: ```py >>> import os >>> from datasets import load_dataset >>> from tqdm import tqdm >>> ds = load_dataset("sshh12/planet-textures", split="train", streaming=True) >>> num_threads = min(32, (os.cpu_count() or 1) + 4) >>> ds = ds.decode(num_threads=num_threads) >>> for _ in tqdm(ds): # 20 times faster ! ... ... ``` why not multiprocessing ? decoding is done with the GIL released in soundfile/PIL/torchvision so multiprocessing would just use more memory TODO - [x] test - [x] add to docs
lhoestq
https://github.com/huggingface/datasets/pull/7450
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7450", "html_url": "https://github.com/huggingface/datasets/pull/7450", "diff_url": "https://github.com/huggingface/datasets/pull/7450.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7450.patch", "merged_at": "2025-03-14T10:35:35" }
true
2,916,235,092
7,449
Cannot load data with different schemas from different parquet files
closed
[ "Hi ! `load_dataset` expects all the data_files to have the same schema.\n\nMaybe you can try enforcing certain `features` using:\n\n```python\nfeatures = Features({\"conversations\": {'content': Value('string'), 'role': Value('string',)}})\nds = load_dataset(..., features=features)\n```", "Thanks! It works if I ...
2025-03-13T08:14:49
2025-03-17T07:27:48
2025-03-17T07:27:46
### Describe the bug Cannot load samples with optional fields from different files. The schema cannot be correctly derived. ### Steps to reproduce the bug When I place two samples with an optional field `some_extra_field` within a single parquet file, it can be loaded via `load_dataset`. ```python import pandas as pd from datasets import load_dataset data = [ {'conversations': {'role': 'user', 'content': 'hello'}}, {'conversations': {'role': 'user', 'content': 'hi', 'some_extra_field': 'some_value'}} ] df = pd.DataFrame(data) df.to_parquet('data.parquet') dataset = load_dataset('parquet', data_files='data.parquet', split='train') print(dataset.features) ``` The schema can be derived. `some_extra_field` is set to None for the first row where it is absent. ``` {'conversations': {'content': Value(dtype='string', id=None), 'role': Value(dtype='string', id=None), 'some_extra_field': Value(dtype='string', id=None)}} ``` However, when I separate the samples into different files, it cannot be loaded. ```python import pandas as pd from datasets import load_dataset data1 = [{'conversations': {'role': 'user', 'content': 'hello'}}] pd.DataFrame(data1).to_parquet('data1.parquet') data2 = [{'conversations': {'role': 'user', 'content': 'hi', 'some_extra_field': 'some_value'}}] pd.DataFrame(data2).to_parquet('data2.parquet') dataset = load_dataset('parquet', data_files=['data1.parquet', 'data2.parquet'], split='train') print(dataset.features) ``` Traceback: ``` Traceback (most recent call last): File "/home/tiger/.local/lib/python3.9/site-packages/datasets/builder.py", line 1854, in _prepare_split_single for _, table in generator: File "/home/tiger/.local/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 106, in _generate_tables yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table) File "/home/tiger/.local/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 73, in _cast_table pa_table = table_cast(pa_table, self.info.features.arrow_schema) File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast return cast_table_to_schema(table, schema) File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 2245, in cast_table_to_schema arrays = [ File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 2246, in <listcomp> cast_array_to_feature( File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 1795, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 1795, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 2108, in cast_array_to_feature raise TypeError(f"Couldn't cast array of type\n{_short_str(array.type)}\nto\n{_short_str(feature)}") TypeError: Couldn't cast array of type struct<content: string, role: string, some_extra_field: string> to {'content': Value(dtype='string', id=None), 'role': Value(dtype='string', id=None)} ``` ### Expected behavior Correctly load data with optional fields from different parquet files. ### Environment info - `datasets` version: 3.3.2 - Platform: Linux-5.10.135.bsk.4-amd64-x86_64-with-glibc2.31 - Python version: 3.9.2 - `huggingface_hub` version: 0.28.1 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
li-plus
https://github.com/huggingface/datasets/issues/7449
null
false
2,916,025,762
7,448
`datasets.disable_caching` doesn't work
open
[ "cc", "Yes I have the same issue. It's a confusingly named function. See [here](https://github.com/huggingface/datasets/blob/main/src/datasets/fingerprint.py#L115-L130)\n\n```\n...\nIf disabled, the library will no longer reload cached datasets files when applying transforms to the datasets.\n More precisely...
2025-03-13T06:40:12
2025-03-22T04:37:07
null
When I use `Dataset.from_generator(my_gen)` to load my dataset, it simply skips my changes to the generator function. I tried `datasets.disable_caching`, but it doesn't work!
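A hedged workaround sketch (my own suggestion, not confirmed in the thread): since `disable_caching` only stops reloading cached results of transforms, pointing `from_generator` at a fresh cache directory forces the generator to be re-run, because there is nothing to reuse there:

```python
import tempfile
from datasets import Dataset

def my_gen():
    # edited generator logic goes here
    yield {"x": 1}

# A throwaway cache_dir means no previously materialized arrow files can be
# picked up for this fingerprint, so the generator runs again.
ds = Dataset.from_generator(my_gen, cache_dir=tempfile.mkdtemp())
```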
UCC-team
https://github.com/huggingface/datasets/issues/7448
null
false
2,915,233,248
7,447
Epochs shortened after resuming mid-epoch with Iterable dataset+StatefulDataloader(persistent_workers=True)
closed
[ "Thanks for reporting ! Maybe we should store the epoch in the state_dict, and then when the dataset is iterated on again after setting a new epoch it should restart from scratch instead of resuming ? wdyt ?", "But why does this only happen when `persistent_workers=True`? I would expect it to work correctly even ...
2025-03-12T21:41:05
2025-07-09T23:04:57
2025-03-14T10:50:10
### Describe the bug When `torchdata.stateful_dataloader.StatefulDataloader(persistent_workers=True)` the epochs after resuming only iterate through the examples that were left in the epoch when the training was interrupted. For example, in the script below training is interrupted on step 124 (epoch 1) when 3 batches are left. Then after resuming, the rest of epochs (2 and 3) only iterate through these 3 batches. ### Steps to reproduce the bug Run the following script with and with PERSISTENT_WORKERS=true. ```python # !/usr/bin/env python3 # torch==2.5.1 # datasets==3.3.2 # torchdata>=0.9.0 import datasets import pprint from torchdata.stateful_dataloader import StatefulDataLoader import os PERSISTENT_WORKERS = ( os.environ.get("PERSISTENT_WORKERS", "False").lower() == "true" ) # PERSISTENT_WORKERS = True # Incorrect resume # ds = datasets.load_from_disk("dataset").to_iterable_dataset(num_shards=4) def generator(): for i in range(128): yield {"x": i} ds = datasets.Dataset.from_generator( generator, features=datasets.Features({"x": datasets.Value("int32")}) ).to_iterable_dataset(num_shards=4) dl = StatefulDataLoader( ds, batch_size=2, num_workers=2, persistent_workers=PERSISTENT_WORKERS ) global_step = 0 epoch = 0 ds_state_dict = None state_dict = None resumed = False while True: if epoch >= 3: break if state_dict is not None: dl.load_state_dict(state_dict) state_dict = None ds_state_dict = None resumed = True print("resumed") for i, batch in enumerate(dl): print(f"epoch: {epoch}, global_step: {global_step}, batch: {batch}") global_step += 1 # consume datapoint # simulate error if global_step == 124 and not resumed: ds_state_dict = ds.state_dict() state_dict = dl.state_dict() print("checkpoint") print("ds_state_dict") pprint.pprint(ds_state_dict) print("dl_state_dict") pprint.pprint(state_dict) break if state_dict is None: ds.set_epoch(epoch) epoch += 1 ``` The script checkpoints when there are three batches left in the second epoch. After resuming, only the last three batches are repeated in the rest of the epochs. If it helps, following are the two state_dicts for the dataloader save at the same step with the two settings. The left one is for `PERSISTENT_WORKERS=False` ![Image](https://github.com/user-attachments/assets/c97d6502-d7bd-4ef4-ae2d-66fe1a9732b1) ### Expected behavior All the elements in the dataset should be iterated through in the epochs following the one where we resumed. The expected behavior can be seen by setting `PERSISTENT_WORKERS=False`. ### Environment info torch==2.5.1 datasets==3.3.2 torchdata>=0.9.0
dhruvdcoder
https://github.com/huggingface/datasets/issues/7447
null
false
2,913,050,552
7,446
pyarrow.lib.ArrowTypeError: Expected dict key of type str or bytes, got 'int'
closed
[ "I think the Counter object you used in 'labels' may be the issue, since the {2:1} inside is the dict and 2 is the int", "> I think the Counter object you used in 'labels' may be the issue, since the {2:1} inside is the dict and 2 is the int我认为您在 'labels' 中使用的 Counter 对象可能是问题所在,因为里面的 {2:1} 是 dict,而 2 是 int\n\nYes...
2025-03-12T07:48:37
2025-07-04T05:14:45
2025-07-04T05:14:45
### Describe the bug A dict whose keys are all str, but I get the following error: ```python test_data=[{'input_ids':[1,2,3],'labels':[[Counter({2:1})]]}] dataset = datasets.Dataset.from_list(test_data) ``` ```bash pyarrow.lib.ArrowTypeError: Expected dict key of type str or bytes, got 'int' ``` ### Steps to reproduce the bug . ### Expected behavior . ### Environment info datasets 3.3.2
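A hedged workaround sketch based on the discussion in the comments (not an official fix): Arrow only accepts string keys for nested dicts, so converting the Counter's integer keys to strings before building the dataset avoids the error.

```python
from collections import Counter
import datasets

# Cast the Counter's int keys to str so Arrow can infer a struct type.
test_data = [
    {"input_ids": [1, 2, 3],
     "labels": [[{str(k): v for k, v in Counter({2: 1}).items()}]]}
]
dataset = datasets.Dataset.from_list(test_data)
print(dataset[0])  # {'input_ids': [1, 2, 3], 'labels': [[{'2': 1}]]}
```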
rangehow
https://github.com/huggingface/datasets/issues/7446
null
false
2,911,507,923
7,445
Fix small bugs with async map
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7445). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-03-11T18:30:57
2025-03-13T10:38:03
2025-03-13T10:37:58
helpful for the next PR to enable parallel image/audio/video decoding and make multimodal datasets go brr (e.g. for lerobot) - fix with_indices - fix resuming with save_state_dict() / load_state_dict() - omg that wasn't easy - remove unnecessary decoding in map() to enable parallelism in FormattedExampleIterable later small bonus: keeping features in batch()
lhoestq
https://github.com/huggingface/datasets/pull/7445
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7445", "html_url": "https://github.com/huggingface/datasets/pull/7445", "diff_url": "https://github.com/huggingface/datasets/pull/7445.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7445.patch", "merged_at": "2025-03-13T10:37:58" }
true
2,911,202,445
7,444
Excessive warnings when resuming an IterableDataset+buffered shuffle+DDP.
open
[ "I had a similar issue when loading the saved iterable dataset state to fast-forward to the mid-train location before resuming. This happened when I shuffled a concatenated dataset. A `iterable_data_state_dict.json` file was saved during checkpointing in the Trainer with:\n```\ndef _save_rng_state(self, output_dir)...
2025-03-11T16:34:39
2025-05-13T09:41:03
null
### Describe the bug I have a large dataset that I shared into 1024 shards and save on the disk during pre-processing. During training, I load the dataset using load_from_disk() and convert it into an iterable dataset, shuffle it and split the shards to different DDP nodes using the recommended method. However, when the training is resumed mid-epoch, I get thousands of identical warning messages: ``` Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. ``` ### Steps to reproduce the bug 1. Run a multi-node training job using the following python script and interrupt the training after a few seconds to save a mid-epoch checkpoint. ```python #!/usr/bin/env python import os import time from typing import Dict, List import torch import lightning as pl from torch.utils.data import DataLoader from datasets import Dataset from datasets.distributed import split_dataset_by_node import datasets from transformers import AutoTokenizer from more_itertools import flatten, chunked from torchdata.stateful_dataloader import StatefulDataLoader from lightning.pytorch.callbacks.on_exception_checkpoint import ( OnExceptionCheckpoint, ) datasets.logging.set_verbosity_debug() def dummy_generator(): # Generate 60 examples: integers from $0$ to $59$ # 64 sequences of different lengths dataset = [ list(range(3, 10)), list(range(10, 15)), list(range(15, 21)), list(range(21, 27)), list(range(27, 31)), list(range(31, 36)), list(range(36, 45)), list(range(45, 50)), ] for i in range(8): for j, ids in enumerate(dataset): yield {"token_ids": [idx + i * 50 for idx in ids]} def group_texts( examples: Dict[str, List[List[int]]], block_size: int, eos_token_id: int, bos_token_id: int, pad_token_id: int, ) -> Dict[str, List[List[int]]]: real_block_size = block_size - 2 # make space for bos and eos # colapse the sequences into a single list of tokens and then create blocks of real_block_size input_ids = [] attention_mask = [] for block in chunked(flatten(examples["token_ids"]), real_block_size): s = [bos_token_id] + list(block) + [eos_token_id] ls = len(s) attn = [True] * ls s += [pad_token_id] * (block_size - ls) attn += [False] * (block_size - ls) input_ids.append(s) attention_mask.append(attn) return {"input_ids": input_ids, "attention_mask": attention_mask} def collate_fn(batch): return { "input_ids": torch.tensor( [item["input_ids"] for item in batch], dtype=torch.long ), "attention_mask": torch.tensor( [item["attention_mask"] for item in batch], dtype=torch.long ), } class DummyModule(pl.LightningModule): def __init__(self): super().__init__() # A dummy linear layer (not used for actual computation) self.layer = torch.nn.Linear(1, 1) self.ds = None self.prepare_data_per_node = False def on_train_start(self): # This hook is called once training begins on each process. print(f"[Rank {self.global_rank}] Training started.", flush=True) self.data_file = open(f"data_{self.global_rank}.txt", "w") def on_train_end(self): self.data_file.close() def training_step(self, batch, batch_idx): # Print batch information to verify data loading. 
time.sleep(5) # print("batch", batch, flush=True) print( f"\n[Rank {self.global_rank}] Training step, epoch {self.trainer.current_epoch}, batch {batch_idx}: {batch['input_ids']}", flush=True, ) self.data_file.write( f"[Rank {self.global_rank}] Training step, epoch {self.trainer.current_epoch}, batch {batch_idx}: {batch['input_ids']}\n" ) # Compute a dummy loss (here, simply a constant tensor) loss = torch.tensor(0.0, requires_grad=True) return loss def on_train_epoch_start(self): epoch = self.trainer.current_epoch print( f"[Rank {self.global_rank}] Training epoch {epoch} started.", flush=True, ) self.data_file.write( f"[Rank {self.global_rank}] Training epoch {epoch} started.\n" ) def configure_optimizers(self): # Return a dummy optimizer. return torch.optim.SGD(self.parameters(), lr=0.001) class DM(pl.LightningDataModule): def __init__(self): super().__init__() self.ds = None self.prepare_data_per_node = False def set_epoch(self, epoch: int): self.ds.set_epoch(epoch) def prepare_data(self): # download the dataset dataset = Dataset.from_generator(dummy_generator) # save the dataset dataset.save_to_disk("dataset", num_shards=4) def setup(self, stage: str): # load the dataset ds = datasets.load_from_disk("dataset").to_iterable_dataset( num_shards=4 ) ds = ds.map( group_texts, batched=True, batch_size=5, fn_kwargs={ "block_size": 5, "eos_token_id": 1, "bos_token_id": 0, "pad_token_id": 2, }, remove_columns=["token_ids"], ).shuffle(seed=42, buffer_size=8) ds = split_dataset_by_node( ds, rank=self.trainer.global_rank, world_size=self.trainer.world_size, ) self.ds = ds def train_dataloader(self): print( f"[Rank {self.trainer.global_rank}] Preparing train_dataloader...", flush=True, ) rank = self.trainer.global_rank print( f"[Rank {rank}] Global rank: {self.trainer.global_rank}", flush=True, ) world_size = self.trainer.world_size print(f"[Rank {rank}] World size: {world_size}", flush=True) return StatefulDataLoader( self.ds, batch_size=2, num_workers=2, collate_fn=collate_fn, drop_last=True, persistent_workers=True, ) if __name__ == "__main__": print("Starting Lightning training", flush=True) # Optionally, print some SLURM environment info for debugging. print(f"SLURM_NNODES: {os.environ.get('SLURM_NNODES', '1')}", flush=True) # Determine the number of nodes from SLURM (defaulting to 1 if not set) num_nodes = int(os.environ.get("SLURM_NNODES", "1")) model = DummyModule() dm = DM() on_exception = OnExceptionCheckpoint( dirpath="checkpoints", filename="on_exception", ) # Configure the Trainer to use distributed data parallel (DDP). trainer = pl.Trainer( accelerator="gpu" if torch.cuda.is_available() else "cpu", devices=1, strategy=( "ddp" if num_nodes > 1 else "auto" ), # Use DDP strategy for multi-node training. num_nodes=num_nodes, max_epochs=2, logger=False, enable_checkpointing=True, num_sanity_val_steps=0, enable_progress_bar=False, callbacks=[on_exception], ) # resume (uncomment to resume) # trainer.fit(model, datamodule=dm, ckpt_path="checkpoints/on_exception.ckpt") # train trainer.fit(model, datamodule=dm) ``` ```bash #!/bin/bash #SBATCH --job-name=pl_ddp_test #SBATCH --nodes=2 # Adjust number of nodes as needed #SBATCH --ntasks-per-node=1 # One GPU (process) per node #SBATCH --cpus-per-task=3 # At least as many dataloader workers as required #SBATCH --gres=gpu:1 # Request one GPU per node #SBATCH --time=00:10:00 # Job runtime (adjust as needed) #SBATCH --partition=gpu-preempt # Partition or queue name #SBATCH -o script.out # Disable Python output buffering. 
export PYTHONUNBUFFERED=1 echo "SLURM job starting on $(date)" echo "Running on nodes: $SLURM_NODELIST" echo "Current directory: $(pwd)" ls -l # Launch the script using srun so that each process starts the Lightning module. srun script.py ``` 2. Uncomment the "resume" line (second to last) and comment the original `trainer.fit` call (last line). It will produce the following log. ``` [Rank 0] Preparing train_dataloader... [Rank 0] Global rank: 0 [Rank 0] World size: 2 [Rank 1] Preparing train_dataloader... [Rank 1] Global rank: 1 [Rank 1] World size: 2 Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Assigning 2 shards (or data sources) of the dataset to each node. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. node#0 dataloader worker#1, ': Starting to iterate over 1/2 shards. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. node#0 dataloader worker#0, ': Starting to iterate over 1/2 shards. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns. Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns. node#0 dataloader worker#1, ': Finished iterating over 1/1 shards. node#0 dataloader worker#0, ': Finished iterating over 1/1 shards. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. [Rank 0] Training started. [Rank 0] Training epoch 0 started. [Rank 0] Training epoch 1 started. Assigning 2 shards (or data sources) of the dataset to each node. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. node#0 dataloader worker#1, ': Starting to iterate over 1/2 shards. node#0 dataloader worker#0, ': Starting to iterate over 1/2 shards. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. 
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. node#1 dataloader worker#1, ': Starting to iterate over 1/2 shards. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. node#1 dataloader worker#0, ': Starting to iterate over 1/2 shards. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns. Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns. node#0 dataloader worker#1, ': Finished iterating over 1/1 shards. node#0 dataloader worker#0, ': Finished iterating over 1/1 shards. `Trainer.fit` stopped: `max_epochs=2` reached. Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns. Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns. node#1 dataloader worker#1, ': Finished iterating over 1/1 shards. node#1 dataloader worker#0, ': Finished iterating over 1/1 shards. [Rank 1] Training started. [Rank 1] Training epoch 0 started. [Rank 1] Training epoch 1 started. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. node#1 dataloader worker#1, ': Starting to iterate over 1/2 shards. node#1 dataloader worker#0, ': Starting to iterate over 1/2 shards. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples. Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns. Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns. node#1 dataloader worker#0, ': Finished iterating over 1/1 shards. node#1 dataloader worker#1, ': Finished iterating over 1/1 shards. ``` I'm also attaching the relevant state_dict to make sure that the state is being checkpointed as expected. 
``` {'_iterator_finished': True, '_snapshot': {'_last_yielded_worker_id': 1, '_main_snapshot': {'_IterableDataset_len_called': None, '_base_seed': 3992758080362545099, '_index_sampler_state': {'samples_yielded': 64}, '_num_workers': 2, '_sampler_iter_state': None, '_sampler_iter_yielded': 32, '_shared_seed': None}, '_snapshot_step': 32, '_worker_snapshots': {'worker_0': {'dataset_state': {'ex_iterable': {'shard_example_idx': 0, 'shard_idx': 1}, 'num_examples_since_previous_state': 0, 'previous_state': {'shard_example_idx': 0, 'shard_idx': 1}, 'previous_state_example_idx': 33}, 'fetcher_state': {'dataset_iter_state': None, 'fetcher_ended': False}, 'worker_id': 0}, 'worker_1': {'dataset_state': {'ex_iterable': {'shard_example_idx': 0, 'shard_idx': 1}, 'num_examples_since_previous_state': 0, 'previous_state': {'shard_example_idx': 0, 'shard_idx': 1}, 'previous_state_example_idx': 33}, 'fetcher_state': {'dataset_iter_state': None, 'fetcher_ended': False}, 'worker_id': 1}}}, '_steps_since_snapshot': 0} ``` ### Expected behavior Since I'm following all the recommended steps, I don't expect to see any warning when resuming. Am I doing something wrong? Also, can someone explain why I'm seeing 20 identical messages in the log in this reproduction setting? I'm trying to understand why I see thousands of these messages with the actual dataset. One more surprising thing I noticed in the logs is the change in a number of shards per worker. In the following messages, the denominator changes from 2 to 1. ``` node#1 dataloader worker#1, ': Starting to iterate over 1/2 shards. ... node#1 dataloader worker#1, ': Finished iterating over 1/1 shards. ``` ### Environment info python: 3.11.10 datasets: 3.3.2 lightning: 2.3.1
dhruvdcoder
https://github.com/huggingface/datasets/issues/7444
null
false
2,908,585,656
7,443
index error when num_shards > len(dataset)
open
[ "Actually, looking at the code a bit more carefully, maybe an even better solution is to explicitly set `num_shards=len(self)` somewhere inside both `push_to_hub()` and `save_to_disk()` when these functions are invoked with `num_shards > len(dataset)`." ]
2025-03-10T22:40:59
2025-03-10T23:43:08
null
In `ds.push_to_hub()` and `ds.save_to_disk()`, `num_shards` must be smaller than or equal to the number of rows in the dataset, but currently this is not checked anywhere inside these functions. Attempting to invoke these functions with `num_shards > len(dataset)` should raise an informative `ValueError`. I frequently work with datasets with a small number of rows where each row is pretty large, so I often encounter this issue, where the function runs until the shard index in `ds.shard(num_shards, indx)` goes out of bounds. Ideally, a `ValueError` should be raised before reaching this point (i.e. as soon as `ds.push_to_hub()` or `ds.save_to_disk()` is invoked with `num_shards > len(dataset)`). It seems that adding something like: ```python if len(self) < num_shards: raise ValueError(f"num_shards ({num_shards}) must be smaller than or equal to the number of rows in the dataset ({len(self)}). Please either reduce num_shards or increase max_shard_size to make sure num_shards <= len(dataset).") ``` to the beginning of the definition of the `ds.shard()` function [here](https://github.com/huggingface/datasets/blob/f693f4e93aabafa878470c80fd42ddb10ec550d6/src/datasets/arrow_dataset.py#L4728) would deal with this issue for both `ds.push_to_hub()` and `ds.save_to_disk()`, but I'm not exactly sure if this is the best place to raise the `ValueError` (it seems that a more correct way to do it would be to write separate checks for `ds.push_to_hub()` and `ds.save_to_disk()`). I'd be happy to submit a PR if you think something along these lines would be acceptable.
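For illustration, a tiny script (path and sizes are invented for the example) that hits the failure mode described above and that the proposed check would turn into an immediate `ValueError`:

```python
from datasets import Dataset

# 4 rows but 16 requested shards: num_shards > len(dataset).
ds = Dataset.from_dict({"x": list(range(4))})

# As described above, this currently runs until the shard index in
# ds.shard(num_shards, index) goes out of bounds; with the proposed check it
# would raise a clear ValueError before writing anything.
ds.save_to_disk("tiny_dataset", num_shards=16)
```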
eminorhan
https://github.com/huggingface/datasets/issues/7443
null
false
2,905,543,017
7,442
Flexible Loader
open
[ "Ideally `save_to_disk` should save in a format compatible with load_dataset, wdyt ?", "> Ideally `save_to_disk` should save in a format compatible with load_dataset, wdyt ?\n\nThat would be perfect if not at least a flexible loader.", "@lhoestq For now, you can use this small utility library: [nanoml](https://...
2025-03-09T16:55:03
2025-03-27T23:58:17
null
### Feature request Can we have a utility function that will use `load_from_disk` when given the local path and `load_dataset` if given an HF dataset? It can be something as simple as this one: ``` def load_hf_dataset(path_or_name): if os.path.exists(path_or_name): return load_from_disk(path_or_name) else: return load_dataset(path_or_name) ``` ### Motivation This can be done inside the user codebase, too, but in my experience, it becomes repetitive code. ### Your contribution I can open a pull request.
dipta007
https://github.com/huggingface/datasets/issues/7442
null
false
2,904,702,329
7,441
`drop_last_batch` does not drop the last batch using IterableDataset + interleave_datasets + multi_worker
open
[ "Hi @memray, I’d like to help fix the issue with `drop_last_batch` not working when `num_workers > 1`. I’ll investigate and propose a solution. Thanks!\n", "Thank you very much for offering to help! I also noticed a problem related to a previous issue and left a comment [here](https://github.com/huggingface/datas...
2025-03-08T10:28:44
2025-03-09T21:27:33
null
### Describe the bug See the script below `drop_last_batch=True` is defined using map() for each dataset. The last batch for each dataset is expected to be dropped, id 21-25. The code behaves as expected when num_workers=0 or 1. When using num_workers>1, 'a-11', 'b-11', 'a-12', 'b-12' are gone and instead 21 and 22 are sampled. ### Steps to reproduce the bug ``` from datasets import Dataset from datasets import interleave_datasets from torch.utils.data import DataLoader def convert_to_str(batch, dataset_name): batch['a'] = [f"{dataset_name}-{e}" for e in batch['a']] return batch def gen1(): for ii in range(1, 25): yield {"a": ii} def gen2(): for ii in range(1, 25): yield {"a": ii} # https://github.com/huggingface/datasets/issues/6565 if __name__ == '__main__': dataset1 = Dataset.from_generator(gen1).to_iterable_dataset(num_shards=2) dataset2 = Dataset.from_generator(gen2).to_iterable_dataset(num_shards=2) dataset1 = dataset1.map(lambda x: convert_to_str(x, dataset_name="a"), batched=True, batch_size=10, drop_last_batch=True) dataset2 = dataset2.map(lambda x: convert_to_str(x, dataset_name="b"), batched=True, batch_size=10, drop_last_batch=True) interleaved = interleave_datasets([dataset1, dataset2], stopping_strategy="all_exhausted") print(f"num_workers=0") loader = DataLoader(interleaved, batch_size=5, num_workers=0) i = 0 for b in loader: print(i, b['a']) i += 1 print('=-' * 20) print(f"num_workers=1") loader = DataLoader(interleaved, batch_size=5, num_workers=1) i = 0 for b in loader: print(i, b['a']) i += 1 print('=-' * 20) print(f"num_workers=2") loader = DataLoader(interleaved, batch_size=5, num_workers=2) i = 0 for b in loader: print(i, b['a']) i += 1 print('=-' * 20) print(f"num_workers=3") loader = DataLoader(interleaved, batch_size=5, num_workers=3) i = 0 for b in loader: print(i, b['a']) i += 1 ``` output is: ``` num_workers=0 0 ['a-1', 'b-1', 'a-2', 'b-2', 'a-3'] 1 ['b-3', 'a-4', 'b-4', 'a-5', 'b-5'] 2 ['a-6', 'b-6', 'a-7', 'b-7', 'a-8'] 3 ['b-8', 'a-9', 'b-9', 'a-10', 'b-10'] 4 ['a-11', 'b-11', 'a-12', 'b-12', 'a-13'] 5 ['b-13', 'a-14', 'b-14', 'a-15', 'b-15'] 6 ['a-16', 'b-16', 'a-17', 'b-17', 'a-18'] 7 ['b-18', 'a-19', 'b-19', 'a-20', 'b-20'] =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- num_workers=1 0 ['a-1', 'b-1', 'a-2', 'b-2', 'a-3'] 1 ['b-3', 'a-4', 'b-4', 'a-5', 'b-5'] 2 ['a-6', 'b-6', 'a-7', 'b-7', 'a-8'] 3 ['b-8', 'a-9', 'b-9', 'a-10', 'b-10'] 4 ['a-11', 'b-11', 'a-12', 'b-12', 'a-13'] 5 ['b-13', 'a-14', 'b-14', 'a-15', 'b-15'] 6 ['a-16', 'b-16', 'a-17', 'b-17', 'a-18'] 7 ['b-18', 'a-19', 'b-19', 'a-20', 'b-20'] =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- num_workers=2 0 ['a-1', 'b-1', 'a-2', 'b-2', 'a-3'] 1 ['a-13', 'b-13', 'a-14', 'b-14', 'a-15'] 2 ['b-3', 'a-4', 'b-4', 'a-5', 'b-5'] 3 ['b-15', 'a-16', 'b-16', 'a-17', 'b-17'] 4 ['a-6', 'b-6', 'a-7', 'b-7', 'a-8'] 5 ['a-18', 'b-18', 'a-19', 'b-19', 'a-20'] 6 ['b-8', 'a-9', 'b-9', 'a-10', 'b-10'] 7 ['b-20', 'a-21', 'b-21', 'a-22', 'b-22'] =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- num_workers=3 Too many dataloader workers: 3 (max is dataset.num_shards=2). Stopping 1 dataloader workers. 
0 ['a-1', 'b-1', 'a-2', 'b-2', 'a-3'] 1 ['a-13', 'b-13', 'a-14', 'b-14', 'a-15'] 2 ['b-3', 'a-4', 'b-4', 'a-5', 'b-5'] 3 ['b-15', 'a-16', 'b-16', 'a-17', 'b-17'] 4 ['a-6', 'b-6', 'a-7', 'b-7', 'a-8'] 5 ['a-18', 'b-18', 'a-19', 'b-19', 'a-20'] 6 ['b-8', 'a-9', 'b-9', 'a-10', 'b-10'] 7 ['b-20', 'a-21', 'b-21', 'a-22', 'b-22'] ``` ### Expected behavior `'a-21', 'b-21', 'a-22', 'b-22'` should be dropped ### Environment info - `datasets` version: 3.3.2 - Platform: Linux-5.15.0-1056-aws-x86_64-with-glibc2.31 - Python version: 3.10.16 - `huggingface_hub` version: 0.28.0 - PyArrow version: 19.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.6.1
memray
https://github.com/huggingface/datasets/issues/7441
null
false
2,903,740,662
7,440
IterableDataset raises FileNotFoundError instead of retrying
open
[ "I have since been training more models with identical architectures over the same dataset, and it is completely unstable. One has now failed at chunk9/1215, whilst others have gotten past that.\n```python\nFileNotFoundError: zstd://example_train_1215.jsonl::hf://datasets/cerebras/SlimPajama-627B@2d0accdd58c5d55119...
2025-03-07T19:14:18
2025-07-22T08:15:44
null
### Describe the bug In https://github.com/huggingface/datasets/issues/6843 it was noted that the streaming feature of `datasets` is highly susceptible to outages and doesn't back off for long (or even *at all*). I was training a model while streaming SlimPajama and training crashed with a `FileNotFoundError`. I can only assume that this was due to a momentary outage considering the file in question, `train/chunk9/example_train_3889.jsonl.zst`, [exists like all other files in SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B/blob/main/train/chunk9/example_train_3889.jsonl.zst). ```python ... File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 2226, in __iter__ for key, example in ex_iterable: File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1499, in __iter__ for x in self.ex_iterable: File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1067, in __iter__ yield from self._iter() File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1231, in _iter for key, transformed_example in iter_outputs(): File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1207, in iter_outputs for i, key_example in inputs_iterator: File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1111, in iter_inputs for key, example in iterator: File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 371, in __iter__ for key, pa_table in self.generate_tables_fn(**gen_kwags): File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/packaged_modules/json/json.py", line 99, in _generate_tables for file_idx, file in enumerate(itertools.chain.from_iterable(files)): File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/utils/track.py", line 50, in __iter__ for x in self.generator(*self.args): File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/utils/file_utils.py", line 1378, in _iter_from_urlpaths raise FileNotFoundError(urlpath) FileNotFoundError: zstd://example_train_3889.jsonl::hf://datasets/cerebras/SlimPajama-627B@2d0accdd58c5d5511943ca1f5ff0e3eb5e293543/train/chunk9/example_train_3889.jsonl.zst ``` That final `raise` is at the bottom of the following snippet: https://github.com/huggingface/datasets/blob/f693f4e93aabafa878470c80fd42ddb10ec550d6/src/datasets/utils/file_utils.py#L1354-L1379 So clearly, something choked up in `xisfile`. ### Steps to reproduce the bug This happens when streaming a dataset and iterating over it. In my case, that iteration is done in Trainer's `inner_training_loop`, but this is not relevant to the iterator. ```python File "/miniconda3/envs/draft/lib/python3.11/site-packages/accelerate/data_loader.py", line 835, in __iter__ next_batch, next_batch_info = self._fetch_batches(main_iterator) ``` ### Expected behavior This bug and the linked issue have one thing in common: *when streaming fails to retrieve an example, the entire program gives up and crashes*. As users, we cannot even protect ourselves from this: when we are iterating over a dataset, we can't make `datasets` skip over a bad example or wait a little longer to retry the iteration, because when a Python generator/iterator raises an error, it loses all its context. 
In other words: if you have something that looks like `for b in a: for c in b: for d in c:`, errors in the innermost loop can only be caught by a `try ... except` in `c.__iter__()`. There should be such exception handling in `datasets` and it should have a **configurable exponential back-off**: first wait and retry after 1 minute, then 2 minutes, then 4 minutes, then 8 minutes, ... and after a given amount of retries, **skip the bad example**, and **only after** skipping a given amount of examples, give up and crash. This was requested in https://github.com/huggingface/datasets/issues/6843 too, since currently there is only linear backoff *and* it is clearly not applied to `xisfile`. ### Environment info - `datasets` version: 3.3.2 *(the latest version)* - Platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28 - Python version: 3.11.7 - `huggingface_hub` version: 0.26.5 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2024.10.0
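For illustration, here is a minimal sketch of the requested retry policy. This is a hypothetical helper with illustrative names — as argued above it would have to live inside `datasets` itself, e.g. around the file access in `xisfile`/`_iter_from_urlpaths`, since user code cannot catch errors raised inside the generator. The skip counter and thresholds would be the configurable part.

```python
import time

def fetch_with_backoff(fetch, retries=5, base_delay=60):
    """Hypothetical helper: exponential back-off on transient errors,
    returning None to signal "skip this example" after the final retry."""
    for attempt in range(retries):
        try:
            return fetch()
        except FileNotFoundError:
            if attempt == retries - 1:
                return None  # caller counts skips and only crashes after too many of them
            time.sleep(base_delay * 2**attempt)  # wait 1 min, 2 min, 4 min, ...
```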
bauwenst
https://github.com/huggingface/datasets/issues/7440
null
false
2,900,143,289
7,439
Fix multi gpu process example
closed
[ "Okay nevermind looks like to works both ways for models. but my doubt still remains, isnt this changing the device of the model every batch?" ]
2025-03-06T11:29:19
2025-03-06T17:07:28
2025-03-06T17:06:38
`to` is not an in-place function. But I am not sure about this code anyway; I think it modifies the global variable `model` every time the function is called, which is on every batch? So it is juggling the same model across every GPU, right? Isn't that very inefficient?
SwayStar123
https://github.com/huggingface/datasets/pull/7439
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7439", "html_url": "https://github.com/huggingface/datasets/pull/7439", "diff_url": "https://github.com/huggingface/datasets/pull/7439.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7439.patch", "merged_at": null }
true
2,899,209,484
7,438
Allow dataset row indexing with np.int types (#7423)
closed
[ "+1", "@lhoestq can you take a look at this?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7438). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Thank you" ]
2025-03-06T03:10:43
2025-07-23T17:56:22
2025-07-23T16:44:42
@lhoestq Proposed fix for #7423. Added a couple of simple tests as requested. I had some test failures related to Java and pyspark even when installing with dev, but these don't seem to be related to the changes here and fail for me even on clean main. The TypeError raised when using the wrong type is: "Wrong key type: '{key}' of type '{type(key)}'. Expected one of int, slice, range, str or Iterable." I think that is fine, but I could modify the int part to something more generic (although I'm not sure what) if wanted.
DavidRConnell
https://github.com/huggingface/datasets/pull/7438
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7438", "html_url": "https://github.com/huggingface/datasets/pull/7438", "diff_url": "https://github.com/huggingface/datasets/pull/7438.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7438.patch", "merged_at": "2025-07-23T16:44:42" }
true
2,899,104,679
7,437
Use pyupgrade --py39-plus for remaining files
open
[ "@lhoestq Have a look?" ]
2025-03-06T02:12:25
2025-07-30T08:34:34
null
This work follows #7428, and "requires-python" is set in pyproject.toml.
cyyever
https://github.com/huggingface/datasets/pull/7437
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7437", "html_url": "https://github.com/huggingface/datasets/pull/7437", "diff_url": "https://github.com/huggingface/datasets/pull/7437.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7437.patch", "merged_at": null }
true
2,898,385,725
7,436
chore: fix typos
closed
[]
2025-03-05T20:17:54
2025-04-28T14:00:09
2025-04-28T13:51:26
null
afuetterer
https://github.com/huggingface/datasets/pull/7436
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7436", "html_url": "https://github.com/huggingface/datasets/pull/7436", "diff_url": "https://github.com/huggingface/datasets/pull/7436.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7436.patch", "merged_at": "2025-04-28T13:51:26" }
true
2,895,536,956
7,435
Refactor `string_to_dict` to return `None` if there is no match instead of raising `ValueError`
closed
[ "cc: @lhoestq ", "I am going to rebase #7434 onto this branch. Then we can merge this one first if you approve, and then #7434.", "@lhoestq any thoughts here?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7435). All of your documentation changes will be reflected on ...
2025-03-04T22:01:20
2025-03-12T16:52:00
2025-03-12T16:52:00
Making this change, as encouraged here: * https://github.com/huggingface/datasets/pull/7434#discussion_r1979933054 Instead of the pattern of using `try`-`except` to handle the case where there is no match, we can check whether the return value is `None`; we can also assert that the return value is not `None` when we know that should be true.
ringohoffman
https://github.com/huggingface/datasets/pull/7435
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7435", "html_url": "https://github.com/huggingface/datasets/pull/7435", "diff_url": "https://github.com/huggingface/datasets/pull/7435.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7435.patch", "merged_at": "2025-03-12T16:51:59" }
true
2,893,075,908
7,434
Refactor `Dataset.map` to reuse cache files mapped with different `num_proc`
closed
[ "@lhoestq please let me know what you think about this.", "It looks like I can't change the merge target to #7435, so it will look like there is a bunch of extra stuff until #7435 is in main.", "@lhoestq Thanks so much for reviewing #7435! Now that that's merged, I think this PR is ready!! Can you kick off CI w...
2025-03-04T06:12:37
2025-05-14T10:45:10
2025-05-12T15:14:08
Fixes #7433 This refactor unifies the `num_proc is None or num_proc == 1` and `num_proc > 1` code paths. Instead of handling them completely separately (one using a list of kwargs and shards, the other just a single set of kwargs and `self`), the `num_proc == 1` case is wrapped in a list so that the only difference is whether or not a pool is used; either case is then set up to load the other's cache files just by changing `num_shards`, and `num_proc == 1` can sequentially load the shards of a dataset mapped with `num_shards > 1` and map any missing shards. Other than the structural refactor, the main contribution of this PR is `existing_cache_file_map`, which uses a regex of `cache_file_name` and `suffix_template` to find existing cache files, grouped by their `num_shards`; using this data structure, we can reset `num_shards` to an existing set of cache files and load them accordingly.
ringohoffman
https://github.com/huggingface/datasets/pull/7434
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7434", "html_url": "https://github.com/huggingface/datasets/pull/7434", "diff_url": "https://github.com/huggingface/datasets/pull/7434.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7434.patch", "merged_at": "2025-05-12T15:14:08" }
true
2,890,240,400
7,433
`Dataset.map` ignores existing caches and remaps when ran with different `num_proc`
closed
[ "This feels related: https://github.com/huggingface/datasets/issues/3044", "@lhoestq This comment specifically, I agree:\n\n* https://github.com/huggingface/datasets/issues/3044#issuecomment-1239877570\n\n> Almost a year later and I'm in a similar boat. Using custom fingerprints and when using multiprocessing the...
2025-03-03T05:51:26
2025-05-12T15:14:09
2025-05-12T15:14:09
### Describe the bug If you `map` a dataset and save it to a specific `cache_file_name` with a specific `num_proc`, and then call map again with that same existing `cache_file_name` but a different `num_proc`, the dataset will be re-mapped. ### Steps to reproduce the bug 1. Download a dataset ```python import datasets dataset = datasets.load_dataset("ylecun/mnist") ``` ``` Generating train split: 100%|██████████| 60000/60000 [00:00<00:00, 116429.85 examples/s] Generating test split: 100%|██████████| 10000/10000 [00:00<00:00, 103310.27 examples/s] ``` 2. `map` and cache it with a specific `num_proc` ```python cache_file_name="./cache/train.map" dataset["train"].map(lambda x: x, cache_file_name=cache_file_name, num_proc=2) ``` ``` Map (num_proc=2): 100%|██████████| 60000/60000 [00:01<00:00, 53764.03 examples/s] ``` 3. `map` it with a different `num_proc` and the same `cache_file_name` as before ```python dataset["train"].map(lambda x: x, cache_file_name=cache_file_name, num_proc=3) ``` ``` Map (num_proc=3): 100%|██████████| 60000/60000 [00:00<00:00, 65377.12 examples/s] ``` ### Expected behavior If I specify an existing `cache_file_name`, I don't expect using a different `num_proc` than the one that was used to generate it to cause the dataset to have be be re-mapped. ### Environment info ```console $ datasets-cli env - `datasets` version: 3.3.2 - Platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.35 - Python version: 3.10.16 - `huggingface_hub` version: 0.29.1 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0 ```
ringohoffman
https://github.com/huggingface/datasets/issues/7433
null
false
2,887,717,289
7,432
Fix type annotation
closed
[ "Thanks ! There is https://github.com/huggingface/datasets/pull/7426 already that fixes the issue, I'm closing your PR if you don't mind" ]
2025-02-28T17:28:20
2025-03-04T15:53:03
2025-03-04T15:53:03
null
NeilGirdhar
https://github.com/huggingface/datasets/pull/7432
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7432", "html_url": "https://github.com/huggingface/datasets/pull/7432", "diff_url": "https://github.com/huggingface/datasets/pull/7432.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7432.patch", "merged_at": null }
true
2,887,244,074
7,431
Issues with large Datasets
open
[ "what's the error message ?", "This was the final error message that it was giving pyarrow.lib.ArrowInvalid: JSON parse error: Column() changed from object to string in row 0", "Here is the list of errors:\n\nTraceback (most recent call last):\n File \".venv/lib/python3.12/site-packages/datasets/packaged_modul...
2025-02-28T14:05:22
2025-03-04T15:02:26
null
### Describe the bug If the COCO annotation file is too large, the dataset will not be able to load it. I am not entirely sure where the issue is, but I am guessing it is due to the code trying to load it all as one line into a dataframe. This was for object detection. My current workaround is the following code, but it would be nice to be able to do it without worrying about it; there is also probably a better way of doing it: ` dataset_dict = json.load(open("./local_data/annotations/train.json")) df = pd.DataFrame(columns=['images', 'annotations', 'categories']) df = df._append({'images': dataset_dict['images'], 'annotations': dataset_dict['annotations'], 'categories': dataset_dict['categories']}, ignore_index=True) train=Dataset.from_pandas(df) dataset_dict = json.load(open("./local_data/annotations/validation.json")) df = pd.DataFrame(columns=['images', 'annotations', 'categories']) df = df._append({'images': dataset_dict['images'], 'annotations': dataset_dict['annotations'], 'categories': dataset_dict['categories']}, ignore_index=True) val = Dataset.from_pandas(df) dataset_dict = json.load(open("./local_data/annotations/test.json")) df = pd.DataFrame(columns=['images', 'annotations', 'categories']) df = df._append({'images': dataset_dict['images'], 'annotations': dataset_dict['annotations'], 'categories': dataset_dict['categories']}, ignore_index=True) test = Dataset.from_pandas(df) dataset = DatasetDict({'train': train, 'validation': val, 'test': test}) ` ### Steps to reproduce the bug 1) Set up the directory and have the json files in COCO format: -local_data |-images |---1.jpg |---2.jpg |---.... |---n.jpg |-annotations |---test.json |---train.json |---validation.json 2) Try to load local_data into a dataset; if the file is larger than about 300kb it will cause an error. ### Expected behavior That it loads the JSONs, preferably in the same format as it does with a smaller size. ### Environment info - `datasets` version: 3.3.3.dev0 - Platform: Linux-6.11.0-17-generic-x86_64-with-glibc2.39 - Python version: 3.12.3 - `huggingface_hub` version: 0.29.0 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
nikitabelooussovbtis
https://github.com/huggingface/datasets/issues/7431
null
false
2,886,922,573
7,430
Error in code "Time to slice and dice" from course "NLP Course"
closed
[ "You should open an issue in the NLP course website / github page. I'm closing this issue if you don't mind", "ok, i don't mind, i'll mark the error there" ]
2025-02-28T11:36:10
2025-03-05T11:32:47
2025-03-03T17:52:15
### Describe the bug When we execute the code ``` frequencies = ( train_df["condition"] .value_counts() .to_frame() .reset_index() .rename(columns={"index": "condition", "condition": "frequency"}) ) frequencies.head() ``` the answer should be like this condition | frequency birth control | 27655 depression | 8023 acne | 5209 anxiety | 4991 pain | 4744 but the actual result is different frequency | count birth control | 27655 depression | 8023 acne | 5209 anxiety | 4991 pain | 4744 This is not correct; the correct code is ``` frequencies = ( train_df["condition"] .value_counts() .to_frame() .reset_index() .rename(columns={"index": "condition", "count": "frequency"}) ) ``` ### Steps to reproduce the bug ``` frequencies = ( train_df["condition"] .value_counts() .to_frame() .reset_index() .rename(columns={"index": "condition", "condition": "frequency"}) ) frequencies.head() ``` ### Expected behavior condition | frequency birth control | 27655 depression | 8023 acne | 5209 anxiety | 4991 pain | 4744 ### Environment info Google Colab
Yurkmez
https://github.com/huggingface/datasets/issues/7430
null
false
2,886,806,513
7,429
Improved type annotation
open
[ "@lhoestq Could someone please take a quick look or let me know if there’s anything I should change? Thanks!", "could you fix the conflicts ? I think some type annotations have been improved since your first commit", "It should be good now.\r\nI'm happy to add more annotations or refine further if needed—just ...
2025-02-28T10:39:10
2025-05-15T12:27:17
null
I've refined several type annotations throughout the codebase to align with current best practices and enhance overall clarity. Given the complexity of the code, there may still be areas that need further attention. I welcome any feedback or suggestions to make these improvements even better. - Fixes #7202
saiden89
https://github.com/huggingface/datasets/pull/7429
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7429", "html_url": "https://github.com/huggingface/datasets/pull/7429", "diff_url": "https://github.com/huggingface/datasets/pull/7429.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7429.patch", "merged_at": null }
true
2,886,111,651
7,428
Use pyupgrade --py39-plus
closed
[ "Hi ! can you run `make style` to fix code formatting ?", "@lhoestq Fixed", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7428). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-02-28T03:39:44
2025-03-22T00:51:20
2025-03-05T15:04:16
null
cyyever
https://github.com/huggingface/datasets/pull/7428
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7428", "html_url": "https://github.com/huggingface/datasets/pull/7428", "diff_url": "https://github.com/huggingface/datasets/pull/7428.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7428.patch", "merged_at": "2025-03-05T15:04:16" }
true
2,886,032,571
7,427
Error splitting the input into NAL units.
open
[ "First time I see this error :/ maybe it's an issue with your version of `multiprocess` and `dill` ? Make sure they are compatible with `datasets`", "> First time I see this error :/ maybe it's an issue with your version of `multiprocess` and `dill` ? Make sure they are compatible with `datasets`\n\nany recommend...
2025-02-28T02:30:15
2025-03-04T01:40:28
null
### Describe the bug I am trying to finetune qwen2.5-vl on 16 * 80G GPUS, and I use `LLaMA-Factory` and set `preprocessing_num_workers=16`. However, I met the following error and the program seem to got crush. It seems that the error come from `datasets` library The error logging is like following: ```text Converting format of dataset (num_proc=16): 100%|█████████▉| 19265/19267 [11:44<00:00, 5.88 examples/s] Converting format of dataset (num_proc=16): 100%|█████████▉| 19266/19267 [11:44<00:00, 5.02 examples/s] Converting format of dataset (num_proc=16): 100%|██████████| 19267/19267 [11:44<00:00, 5.44 examples/s] Converting format of dataset (num_proc=16): 100%|██████████| 19267/19267 [11:44<00:00, 27.34 examples/s] Running tokenizer on dataset (num_proc=16): 0%| | 0/19267 [00:00<?, ? examples/s] Invalid NAL unit size (45405 > 35540). Invalid NAL unit size (86720 > 54856). Invalid NAL unit size (7131 > 3225). missing picture in access unit with size 54860 Invalid NAL unit size (48042 > 33645). missing picture in access unit with size 3229 missing picture in access unit with size 33649 Invalid NAL unit size (86720 > 54856). Invalid NAL unit size (48042 > 33645). Error splitting the input into NAL units. missing picture in access unit with size 35544 Invalid NAL unit size (45405 > 35540). Error splitting the input into NAL units. Error splitting the input into NAL units. Invalid NAL unit size (8187 > 7069). missing picture in access unit with size 7073 Invalid NAL unit size (8187 > 7069). Error splitting the input into NAL units. Invalid NAL unit size (7131 > 3225). Error splitting the input into NAL units. Invalid NAL unit size (14013 > 5998). missing picture in access unit with size 6002 Invalid NAL unit size (14013 > 5998). Error splitting the input into NAL units. Invalid NAL unit size (17173 > 7231). missing picture in access unit with size 7235 Invalid NAL unit size (17173 > 7231). Error splitting the input into NAL units. Invalid NAL unit size (16964 > 6055). missing picture in access unit with size 6059 Invalid NAL unit size (16964 > 6055). Exception in thread Thread-9 (accepter)Error splitting the input into NAL units. : Traceback (most recent call last): File "/opt/conda/envs/python3.10.13/lib/python3.10/threading.py", line 1016, in _bootstrap_inner Running tokenizer on dataset (num_proc=16): 0%| | 0/19267 [13:22<?, ? examples/s] self.run() File "/opt/conda/envs/python3.10.13/lib/python3.10/threading.py", line 953, in run Invalid NAL unit size (7032 > 2927). missing picture in access unit with size 2931 self._target(*self._args, **self._kwargs) File "/opt/conda/envs/python3.10.13/lib/python3.10/site-packages/multiprocess/managers.py", line 194, in accepter Invalid NAL unit size (7032 > 2927). Error splitting the input into NAL units. t.start() File "/opt/conda/envs/python3.10.13/lib/python3.10/threading.py", line 935, in start Invalid NAL unit size (28973 > 6121). missing picture in access unit with size 6125 _start_new_thread(self._bootstrap, ())Invalid NAL unit size (28973 > 6121). RuntimeError: can't start new threadError splitting the input into NAL units. Invalid NAL unit size (4411 > 296). missing picture in access unit with size 300 Invalid NAL unit size (4411 > 296). Error splitting the input into NAL units. Invalid NAL unit size (14414 > 1471). missing picture in access unit with size 1475 Invalid NAL unit size (14414 > 1471). Error splitting the input into NAL units. Invalid NAL unit size (5283 > 1792). 
missing picture in access unit with size 1796 Invalid NAL unit size (5283 > 1792). Error splitting the input into NAL units. Invalid NAL unit size (79147 > 10042). missing picture in access unit with size 10046 Invalid NAL unit size (79147 > 10042). Error splitting the input into NAL units. Invalid NAL unit size (45405 > 35540). Invalid NAL unit size (86720 > 54856). Invalid NAL unit size (7131 > 3225). missing picture in access unit with size 54860 Invalid NAL unit size (48042 > 33645). missing picture in access unit with size 3229 missing picture in access unit with size 33649 Invalid NAL unit size (86720 > 54856). Invalid NAL unit size (48042 > 33645). Error splitting the input into NAL units. missing picture in access unit with size 35544 Invalid NAL unit size (45405 > 35540). Error splitting the input into NAL units. Error splitting the input into NAL units. Invalid NAL unit size (8187 > 7069). missing picture in access unit with size 7073 Invalid NAL unit size (8187 > 7069). Error splitting the input into NAL units. Invalid NAL unit size (7131 > 3225). Error splitting the input into NAL units. Invalid NAL unit size (14013 > 5998). missing picture in access unit with size 6002 Invalid NAL unit size (14013 > 5998). Error splitting the input into NAL units. Invalid NAL unit size (17173 > 7231). missing picture in access unit with size 7235 Invalid NAL unit size (17173 > 7231). Error splitting the input into NAL units. Invalid NAL unit size (16964 > 6055). missing picture in access unit with size 6059 Invalid NAL unit size (16964 > 6055). Exception in thread Thread-9 (accepter)Error splitting the input into NAL units. : Traceback (most recent call last): File "/opt/conda/envs/python3.10.13/lib/python3.10/threading.py", line 1016, in _bootstrap_inner Running tokenizer on dataset (num_proc=16): 0%| | 0/19267 [13:22<?, ? examples/s] self.run() File "/opt/conda/envs/python3.10.13/lib/python3.10/threading.py", line 953, in run Invalid NAL unit size (7032 > 2927). missing picture in access unit with size 2931 self._target(*self._args, **self._kwargs) File "/opt/conda/envs/python3.10.13/lib/python3.10/site-packages/multiprocess/managers.py", line 194, in accepter Invalid NAL unit size (7032 > 2927). Error splitting the input into NAL units. t.start() File "/opt/conda/envs/python3.10.13/lib/python3.10/threading.py", line 935, in start Invalid NAL unit size (28973 > 6121). missing picture in access unit with size 6125 _start_new_thread(self._bootstrap, ())Invalid NAL unit size (28973 > 6121). RuntimeError: can't start new threadError splitting the input into NAL units. Invalid NAL unit size (4411 > 296). missing picture in access unit with size 300 Invalid NAL unit size (4411 > 296). Error splitting the input into NAL units. Invalid NAL unit size (14414 > 1471). missing picture in access unit with size 1475 Invalid NAL unit size (14414 > 1471). Error splitting the input into NAL units. Invalid NAL unit size (5283 > 1792). missing picture in access unit with size 1796 Invalid NAL unit size (5283 > 1792). Error splitting the input into NAL units. Invalid NAL unit size (79147 > 10042). missing picture in access unit with size 10046 Invalid NAL unit size (79147 > 10042). Error splitting the input into NAL units. Invalid NAL unit size (45405 > 35540). Invalid NAL unit size (86720 > 54856). Invalid NAL unit size (7131 > 3225). missing picture in access unit with size 54860 Invalid NAL unit size (48042 > 33645). 
missing picture in access unit with size 3229 missing picture in access unit with size 33649 Invalid NAL unit size (86720 > 54856). Invalid NAL unit size (48042 > 33645). Error splitting the input into NAL units. missing picture in access unit with size 35544 Invalid NAL unit size (45405 > 35540). Error splitting the input into NAL units. Error splitting the input into NAL units. Invalid NAL unit size (8187 > 7069). missing picture in access unit with size 7073 Invalid NAL unit size (8187 > 7069). Error splitting the input into NAL units. Invalid NAL unit size (7131 > 3225). Error splitting the input into NAL units. Invalid NAL unit size (14013 > 5998). missing picture in access unit with size 6002 Invalid NAL unit size (14013 > 5998). Error splitting the input into NAL units. Invalid NAL unit size (17173 > 7231). missing picture in access unit with size 7235 Invalid NAL unit size (17173 > 7231). Error splitting the input into NAL units. Invalid NAL unit size (16964 > 6055). missing picture in access unit with size 6059 Invalid NAL unit size (16964 > 6055). Exception in thread Thread-9 (accepter)Error splitting the input into NAL units. : Traceback (most recent call last): File "/opt/conda/envs/python3.10.13/lib/python3.10/threading.py", line 1016, in _bootstrap_inner Running tokenizer on dataset (num_proc=16): 0%| | 0/19267 [13:22<?, ? examples/s] self.run() File "/opt/conda/envs/python3.10.13/lib/python3.10/threading.py", line 953, in run Invalid NAL unit size (7032 > 2927). missing picture in access unit with size 2931 self._target(*self._args, **self._kwargs) File "/opt/conda/envs/python3.10.13/lib/python3.10/site-packages/multiprocess/managers.py", line 194, in accepter Invalid NAL unit size (7032 > 2927). Error splitting the input into NAL units. t.start() File "/opt/conda/envs/python3.10.13/lib/python3.10/threading.py", line 935, in start Invalid NAL unit size (28973 > 6121). missing picture in access unit with size 6125 _start_new_thread(self._bootstrap, ())Invalid NAL unit size (28973 > 6121). RuntimeError: can't start new threadError splitting the input into NAL units. Invalid NAL unit size (4411 > 296). missing picture in access unit with size 300 Invalid NAL unit size (4411 > 296). Error splitting the input into NAL units. Invalid NAL unit size (14414 > 1471). missing picture in access unit with size 1475 Invalid NAL unit size (14414 > 1471). Error splitting the input into NAL units. Invalid NAL unit size (5283 > 1792). missing picture in access unit with size 1796 Invalid NAL unit size (5283 > 1792). Error splitting the input into NAL units. Invalid NAL unit size (79147 > 10042). missing picture in access unit with size 10046 Invalid NAL unit size (79147 > 10042). Error splitting the input into NAL units. Invalid NAL unit size (45405 > 35540). Invalid NAL unit size (86720 > 54856). Invalid NAL unit size (7131 > 3225). missing picture in access unit with size 54860 Invalid NAL unit size (48042 > 33645). missing picture in access unit with size 3229 missing picture in access unit with size 33649 Invalid NAL unit size (86720 > 54856). Invalid NAL unit size (48042 > 33645). Error splitting the input into NAL units. missing picture in access unit with size 35544 Invalid NAL unit size (45405 > 35540). Error splitting the input into NAL units. Error splitting the input into NAL units. Invalid NAL unit size (8187 > 7069). missing picture in access unit with size 7073 Invalid NAL unit size (8187 > 7069). Error splitting the input into NAL units. Invalid NAL unit size (7131 > 3225). 
Error splitting the input into NAL units. Invalid NAL unit size (14013 > 5998). missing picture in access unit with size 6002 Invalid NAL unit size (14013 > 5998). Error splitting the input into NAL units. Invalid NAL unit size (17173 > 7231). missing picture in access unit with size 7235 Invalid NAL unit size (17173 > 7231). Error splitting the input into NAL units. Invalid NAL unit size (16964 > 6055). missing picture in access unit with size 6059 Invalid NAL unit size (16964 > 6055). Exception in thread Thread-9 (accepter)Error splitting the input into NAL units. : Traceback (most recent call last): File "/opt/conda/envs/python3.10.13/lib/python3.10/threading.py", line 1016, in _bootstrap_inner Running tokenizer on dataset (num_proc=16): 0%| | 0/19267 [13:22<?, ? examples/s] self.run() File "/opt/conda/envs/python3.10.13/lib/python3.10/threading.py", line 953, in run Invalid NAL unit size (7032 > 2927). missing picture in access unit with size 2931 self._target(*self._args, **self._kwargs) File "/opt/conda/envs/python3.10.13/lib/python3.10/site-packages/multiprocess/managers.py", line 194, in accepter Invalid NAL unit size (7032 > 2927). Error splitting the input into NAL units. t.start() File "/opt/conda/envs/python3.10.13/lib/python3.10/threading.py", line 935, in start Invalid NAL unit size (28973 > 6121). missing picture in access unit with size 6125 _start_new_thread(self._bootstrap, ())Invalid NAL unit size (28973 > 6121). RuntimeError: can't start new threadError splitting the input into NAL units. Invalid NAL unit size (4411 > 296). missing picture in access unit with size 300 Invalid NAL unit size (4411 > 296). Error splitting the input into NAL units. Invalid NAL unit size (14414 > 1471). missing picture in access unit with size 1475 Invalid NAL unit size (14414 > 1471). Error splitting the input into NAL units. Invalid NAL unit size (5283 > 1792). missing picture in access unit with size 1796 Invalid NAL unit size (5283 > 1792). Error splitting the input into NAL units. Invalid NAL unit size (79147 > 10042). missing picture in access unit with size 10046 Invalid NAL unit size (79147 > 10042). Error splitting the input into NAL units. ``` ### Others _No response_ ### Steps to reproduce the bug None ### Expected behavior excpect to run successfully ### Environment info ``` transformers==4.49.0 datasets==3.2.0 accelerate==1.2.1 peft==0.12.0 trl==0.9.6 tokenizers==0.21.0 gradio>=4.38.0,<=5.18.0 pandas>=2.0.0 scipy einops sentencepiece tiktoken protobuf uvicorn pydantic fastapi sse-starlette matplotlib>=3.7.0 fire packaging pyyaml numpy<2.0.0 av librosa tyro<0.9.0 openlm-hub qwen-vl-utils ```
MengHao666
https://github.com/huggingface/datasets/issues/7427
null
false
2,883,754,507
7,426
fix: None default with bool type on load creates typing error
closed
[]
2025-02-27T08:11:36
2025-03-04T15:53:40
2025-03-04T15:53:40
Hello! Pyright flags any use of `load_dataset` as an error, because the default for `trust_remote_code` is `None`, but the function is typed as `bool`, not `Optional[bool]`. I changed the type and docstrings to reflect this, but no other code was touched.
stephantul
https://github.com/huggingface/datasets/pull/7426
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7426", "html_url": "https://github.com/huggingface/datasets/pull/7426", "diff_url": "https://github.com/huggingface/datasets/pull/7426.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7426.patch", "merged_at": "2025-03-04T15:53:40" }
true
2,883,684,686
7,425
load_dataset("livecodebench/code_generation_lite", version_tag="release_v2") TypeError: 'NoneType' object is not callable
open
[ "> datasets\n\nHi, have you solved this bug? Today I also met the same problem about `livecodebench/code_generation_lite` when evaluating the `Open-R1` repo. I am looking forward to your reply!\n\n![Image](https://github.com/user-attachments/assets/02e92fbf-da33-41b3-b8d4-f79b293a54f1)", "Hey guys,\nI tried to re...
2025-02-27T07:36:02
2025-03-27T05:05:33
null
### Describe the bug from datasets import load_dataset lcb_codegen = load_dataset("livecodebench/code_generation_lite", version_tag="release_v2") or configs = get_dataset_config_names("livecodebench/code_generation_lite", trust_remote_code=True) both error: Traceback (most recent call last): File "", line 1, in File "/workspace/miniconda/envs/grpo/lib/python3.10/site-packages/datasets/load.py", line 2131, in load_dataset builder_instance = load_dataset_builder( File "/workspace/miniconda/envs/grpo/lib/python3.10/site-packages/datasets/load.py", line 1888, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( TypeError: 'NoneType' object is not callable ### Steps to reproduce the bug from datasets import get_dataset_config_names configs = get_dataset_config_names("livecodebench/code_generation_lite", trust_remote_code=True) OR lcb_codegen = load_dataset("livecodebench/code_generation_lite", version_tag="release_v2") ### Expected behavior load datasets livecodebench/code_generation_lite ### Environment info import datasets version '3.3.2'
dshwei
https://github.com/huggingface/datasets/issues/7425
null
false
2,882,663,621
7,424
Faster folder based builder + parquet support + allow repeated media + use torchvideo
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7424). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-02-26T19:55:18
2025-03-05T18:51:00
2025-03-05T17:41:23
This will be useful for LeRobotDataset (robotics datasets for [lerobot](https://github.com/huggingface/lerobot) based on videos) Impacted builders: - ImageFolder - AudioFolder - VideoFolder Improvements: - faster to stream (got a 5x speed up on an image dataset) - improved RAM usage - support for metadata.parquet - allow to link to an image/audio/video multiple times - support for pyarrow filters (mostly efficient for parquet) - link to files using fields names `*_file_name` (in addition to the already existing `file_name`) - this allows to have multiple image/audio/video per row - there is also `file_names` and `*_file_names` for lists of image/audio/video Changes: - the builders iterate on the metadata files instead of the media files - the builders iterate on chunks of metadata instead of loading them in RAM completely - metadata files are no longer handled separately in `data_files` - added the `filters` argument to pass to `load_dataset` - either as an [Expression](https://arrow.apache.org/docs/python/generated/pyarrow.dataset.Expression.html) - or as tuples like `filters=[('event_name', '=', 'SomeEvent')]` - small breaking change: you can't add labels to a dataset with`drop_labels=False` if it has a metadata file - small breaking change: you can't use one metadata file for multiple splits anymore Example: `lhoestq/pusht-videofolder` is a video dataset with metadata.parquet where multiple rows can point to the same video ```python In [1]: from datasets import load_dataset In [2]: load_dataset("lhoestq/pusht-videofolder") Resolving data files: 100%|██████████████████████████████| 207/207 [00:00<00:00, 1087.32it/s] Out[2]: DatasetDict({ train: Dataset({ features: ['video', 'observation.state', 'action', 'episode_index', 'frame_index', 'timestamp', 'next.reward', 'next.done', 'next.success', 'index', 'task_index'], num_rows: 25650 }) }) In [3]: load_dataset("lhoestq/pusht-videofolder", filters=[("next.reward", ">", 0.5)]) Resolving data files: 100%|██████████████████████████████| 207/207 [00:01<00:00, 183.03it/s] Out[3]: DatasetDict({ train: Dataset({ features: ['video', 'observation.state', 'action', 'episode_index', 'frame_index', 'timestamp', 'next.reward', 'next.done', 'next.success', 'index', 'task_index'], num_rows: 5773 }) }) ``` Additional change for VideoFolder: - decord can't be installed in many setups, I switched the backend to torchvision instead - I also added streaming capability from HF (you can get video frames without downloading the full video from HF) Example: load a robotics dataset ```python In [1]: from datasets import load_dataset ds In [2]: ds = load_dataset("lhoestq/pusht-videofolder") Resolving data files: 100%|██████████████████████████████| 207/207 [00:00<00:00, 624.81it/s] In [3]: ds["train"][0] Out[3]: {'video': <torchvision.io.video_reader.VideoReader at 0x1145dc290>, 'observation.state': [222.0, 97.0], 'action': [233.0, 71.0], 'episode_index': 0, 'frame_index': 0, 'timestamp': 0.0, 'next.reward': 0.19029748439788818, 'next.done': False, 'next.success': False, 'index': 0, 'task_index': 0} ``` Example: stream frames without downloading full videos ```python In [1]: from datasets import load_dataset In [2]: ds = load_dataset("BrianGuo/Tennis_Data", streaming=True) In [3]: example = next(iter(ds["train"])) In [4]: video = example["video"] In [5]: video.get_metadata() Out[5]: {'audio': {'framerate': [44100.0], 'duration': [2027.35]}, 'video': {'fps': [59.00002712894387], 'duration': [2027.355]}} In [6]: video.seek(1800, keyframes_only=True) # 30min Out[6]: 
<torchvision.io.video_reader.VideoReader at 0x148d4d010> In [7]: next(video) Out[7]: {'data': tensor([[[ 76, 77, 79, ..., 41, 39, 38], [ 76, 77, 79, ..., 40, 39, 35], [ 76, 77, 79, ..., 34, 30, 26], ..., [127, 127, 127, ..., 125, 125, 125], [125, 126, 126, ..., 125, 125, 125], [122, 124, 126, ..., 125, 125, 125]]], dtype=torch.uint8), 'pts': 1800.0} ``` TODO: - [x] docs - [x] fix tests
lhoestq
https://github.com/huggingface/datasets/pull/7424
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7424", "html_url": "https://github.com/huggingface/datasets/pull/7424", "diff_url": "https://github.com/huggingface/datasets/pull/7424.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7424.patch", "merged_at": "2025-03-05T17:41:22" }
true
2,879,271,409
7,423
Row indexing a dataset with numpy integers
closed
[ "Would be cool to be consistent when it comes to indexing with numpy objects, if we do accept numpy arrays we should indeed accept numpy integers. Your idea sounds reasonable, I'd also be in favor of adding a simple test as well" ]
2025-02-25T18:44:45
2025-07-28T02:23:17
2025-07-28T02:23:17
### Feature request Allow indexing datasets with a scalar numpy integer type. ### Motivation Indexing a dataset with a scalar numpy.int* object raises a TypeError. This is due to the test in `datasets/formatting/formatting.py:key_to_query_type` ``` python def key_to_query_type(key: Union[int, slice, range, str, Iterable]) -> str: if isinstance(key, int): return "row" elif isinstance(key, str): return "column" elif isinstance(key, (slice, range, Iterable)): return "batch" _raise_bad_key_type(key) ``` In the row case, it checks if key is an int, which returns false when key is integer like but not a builtin python integer type. This is counterintuitive because a numpy array of np.int64s can be used for the batch case. For example: ``` python import numpy as np import datasets dataset = datasets.Dataset.from_dict({"a": [1, 2, 3, 4], "b": [5, 6, 7, 8]}) # Regular indexing dataset[0] dataset[:2] # Indexing with numpy data types (expect same results) idx = np.asarray([0, 1]) dataset[idx] # Succeeds when using an array of np.int64 values dataset[idx[0]] # Fails with TypeError when using scalar np.int64 ``` For the user, this can be solved by wrapping `idx[0]` in `int` but the test could also be changed in `key_to_query_type` to accept a less strict definition of int. ``` diff +import numbers + def key_to_query_type(key: Union[int, slice, range, str, Iterable]) -> str: + if isinstance(key, numbers.Integral): - if isinstance(key, int): return "row" elif isinstance(key, str): return "column" elif isinstance(key, (slice, range, Iterable)): return "batch" _raise_bad_key_type(key) ``` Looking at how others do it, pandas has an `is_integer` definition that it checks which uses `is_integer_object` defined in `pandas/_libs/utils.pxd`: ``` cython cdef inline bint is_integer_object(object obj) noexcept: """ Cython equivalent of `isinstance(val, (int, np.integer)) and not isinstance(val, (bool, np.timedelta64))` Parameters ---------- val : object Returns ------- is_integer : bool Notes ----- This counts np.timedelta64 objects as integers. """ return (not PyBool_Check(obj) and isinstance(obj, (int, cnp.integer)) and not is_timedelta64_object(obj)) ``` This would be less flexible as it explicitly checks for numpy integer, but worth noting that they had the need to ensure the key is not a bool. ### Your contribution I can submit a pull request with the above changes after checking that indexing succeeds with the numpy integer type. Or if there is a different integer check that would be preferred I could add that. If there is a reason not to want this behavior that is fine too.
DavidRConnell
https://github.com/huggingface/datasets/issues/7423
null
false
2,878,369,052
7,421
DVC integration broken
open
[ "Unfortunately `url` is a reserved argument in `fsspec.url_to_fs`, so ideally file system implementations like DVC should use another argument name to avoid this kind of errors" ]
2025-02-25T13:14:31
2025-03-03T17:42:02
null
### Describe the bug The DVC integration seems to be broken. Followed this guide: https://dvc.org/doc/user-guide/integrations/huggingface ### Steps to reproduce the bug #### Script to reproduce ~~~python from datasets import load_dataset dataset = load_dataset( "csv", data_files="dvc://workshop/satellite-data/jan_train.csv", storage_options={"url": "https://github.com/iterative/dataset-registry.git"}, ) print(dataset) ~~~ #### Error log ~~~ Traceback (most recent call last): File "C:\tmp\test\load.py", line 3, in <module> dataset = load_dataset( ^^^^^^^^^^^^^ File "C:\tmp\test\.venv\Lib\site-packages\datasets\load.py", line 2151, in load_dataset builder_instance.download_and_prepare( File "C:\tmp\test\.venv\Lib\site-packages\datasets\builder.py", line 808, in download_and_prepare fs, output_dir = url_to_fs(output_dir, **(storage_options or {})) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: url_to_fs() got multiple values for argument 'url' ~~~ ### Expected behavior Integration would work and the indicated file is downloaded and opened. ### Environment info #### Python version ~~~ python --version Python 3.11.10 ~~~ #### Venv (pip install datasets dvc): ~~~ Package Version ---------------------- ----------- aiohappyeyeballs 2.4.6 aiohttp 3.11.13 aiohttp-retry 2.9.1 aiosignal 1.3.2 amqp 5.3.1 annotated-types 0.7.0 antlr4-python3-runtime 4.9.3 appdirs 1.4.4 asyncssh 2.20.0 atpublic 5.1 attrs 25.1.0 billiard 4.2.1 celery 5.4.0 certifi 2025.1.31 cffi 1.17.1 charset-normalizer 3.4.1 click 8.1.8 click-didyoumean 0.3.1 click-plugins 1.1.1 click-repl 0.3.0 colorama 0.4.6 configobj 5.0.9 cryptography 44.0.1 datasets 3.3.2 dictdiffer 0.9.0 dill 0.3.8 diskcache 5.6.3 distro 1.9.0 dpath 2.2.0 dulwich 0.22.7 dvc 3.59.1 dvc-data 3.16.9 dvc-http 2.32.0 dvc-objects 5.1.0 dvc-render 1.0.2 dvc-studio-client 0.21.0 dvc-task 0.40.2 entrypoints 0.4 filelock 3.17.0 flatten-dict 0.4.2 flufl-lock 8.1.0 frozenlist 1.5.0 fsspec 2024.12.0 funcy 2.0 gitdb 4.0.12 gitpython 3.1.44 grandalf 0.8 gto 1.7.2 huggingface-hub 0.29.1 hydra-core 1.3.2 idna 3.10 iterative-telemetry 0.0.10 kombu 5.4.2 markdown-it-py 3.0.0 mdurl 0.1.2 multidict 6.1.0 multiprocess 0.70.16 networkx 3.4.2 numpy 2.2.3 omegaconf 2.3.0 orjson 3.10.15 packaging 24.2 pandas 2.2.3 pathspec 0.12.1 platformdirs 4.3.6 prompt-toolkit 3.0.50 propcache 0.3.0 psutil 7.0.0 pyarrow 19.0.1 pycparser 2.22 pydantic 2.10.6 pydantic-core 2.27.2 pydot 3.0.4 pygit2 1.17.0 pygments 2.19.1 pygtrie 2.5.0 pyparsing 3.2.1 python-dateutil 2.9.0.post0 pytz 2025.1 pywin32 308 pyyaml 6.0.2 requests 2.32.3 rich 13.9.4 ruamel-yaml 0.18.10 ruamel-yaml-clib 0.2.12 scmrepo 3.3.10 semver 3.0.4 setuptools 75.8.0 shellingham 1.5.4 shortuuid 1.0.13 shtab 1.7.1 six 1.17.0 smmap 5.0.2 sqltrie 0.11.2 tabulate 0.9.0 tomlkit 0.13.2 tqdm 4.67.1 typer 0.15.1 typing-extensions 4.12.2 tzdata 2025.1 urllib3 2.3.0 vine 5.1.0 voluptuous 0.15.2 wcwidth 0.2.13 xxhash 3.5.0 yarl 1.18.3 zc-lockfile 3.0.post1 ~~~
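For what it's worth, the clash can be reproduced with `fsspec` alone, since the traceback shows `datasets` forwarding `storage_options` into `url_to_fs`, whose first positional parameter is itself named `url` (a sketch, assuming that internal call):

```python
from fsspec.core import url_to_fs

storage_options = {"url": "https://github.com/iterative/dataset-registry.git"}

# `datasets` effectively does the equivalent of this in builder.py, so the
# "url" key in storage_options collides with the positional `url` parameter:
url_to_fs("dvc://workshop/satellite-data/jan_train.csv", **storage_options)
# TypeError: url_to_fs() got multiple values for argument 'url'
```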
maxstrobel
https://github.com/huggingface/datasets/issues/7421
null
false
2,876,281,928
7,420
better correspondence between cached and saved datasets created using from_generator
open
[]
2025-02-24T22:14:37
2025-02-26T03:10:22
null
### Feature request At the moment `.from_generator` can only create a dataset that lives in the cache. The cached dataset cannot be loaded with `load_from_disk` because the cache folder is missing `state.json`. So the only way to convert this cached dataset to a regular `Dataset` is to use `save_to_disk`, which needs to create a copy of the cached dataset. For large datasets this can end up wasting a lot of space. In my case the saving operation failed, so I am stuck with a large cached dataset and no clear way to convert it to a `Dataset` that I can use. The requested feature is to provide a way to load a cached dataset using `.load_from_disk`. Alternatively, `.from_generator` could create the dataset at a specified location so that it can be loaded from there with `.load_from_disk`. ### Motivation I have the following workflow, which has exposed some awkwardness in Datasets saving/caching. 1. I created a cached dataset using `.from_generator`, which was cached in a folder. This dataset is rather large (~600GB) with many shards. 2. I tried to save this dataset using `.save_to_disk` to another location so that I can use it later as a `Dataset`. This essentially creates another copy (for a total of 1.2TB!) of what is already in the cache... In my case the saving operation keeps dying for some reason, and I am stuck with a cached dataset and no copy. 3. Now I am trying to "save" the existing cached dataset, but it is not clear how to access the cached files after `.from_generator` has finished, e.g. from a different process. I should not even be looking at the cache, but I really do not want to waste another 2 hours generating the set only to have it fail again (I have already done this a couple of times). - I tried `.load_from_disk` but it does not work with cached files and complains that this is not a `Dataset` (!). - I looked at `.from_file`, which takes one file, but the cache has many files (shards), so I am not sure how to make this work. - I tried `.load_dataset`, but this seems to either try to "download" a copy (of a file which is already in the local file system!) which I will then need to save, or I need to use `streaming=False` to create an `IterableDataset` which I then need to convert (using the cache) to a `Dataset` so that I can save it. With both options I will end up with 3 copies of the same dataset for a total of ~2TB! I am hoping there is another way to do this... Maybe I am missing something here: I looked at docs and forums but no luck. I have a bunch of arrow files cached by `Dataset.from_generator` and no clean way to make them into a `Dataset` that I can use. This all could be so much easier if `load_from_disk` could recognize the cached files and produce a `Dataset`: after the cache is created I would not have to "save" it again and I could just load it when I need it. At the moment `load_from_disk` needs `state.json`, which is lacking in the cache folder. So perhaps `.from_generator` could be made to "finalize" (e.g. create `state.json`) the dataset once it is done so that it can be loaded easily. Or provide `.from_generator` with a `save_to_dir` parameter in addition to `cache_dir`, which can be used for the whole process including creating the `state.json` at the end. As a proof of concept I just created `state.json` by hand and `load_from_disk` worked using the cache! So it seems to be the missing piece here. ### Your contribution Time permitting, I can look into `.from_generator` to see if adding `state.json` is feasible.
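As an interim workaround, a sketch (assuming the cache folder only contains the generator's arrow shards; the path is illustrative): the cached shards can be turned back into a `Dataset` without regenerating them by memory-mapping each shard with `Dataset.from_file` and concatenating.

```python
from pathlib import Path
from datasets import Dataset, concatenate_datasets

# illustrative path: wherever from_generator wrote its shards
cache_dir = Path("path/to/.cache/huggingface/datasets/generator/<fingerprint>")
shards = sorted(cache_dir.glob("*.arrow"))

# from_file memory-maps, so this does not materialize a copy in RAM
ds = concatenate_datasets([Dataset.from_file(str(p)) for p in shards])

ds.save_to_disk("my_dataset")  # optional: makes it loadable with load_from_disk later
```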
vttrifonov
https://github.com/huggingface/datasets/issues/7420
null
false
2,875,635,320
7,419
Import order crashes script execution
open
[]
2025-02-24T17:03:43
2025-02-24T17:03:43
null
### Describe the bug Hello, I'm trying to convert an HF dataset into a TFRecord so I'm importing `tensorflow` and `datasets` to do so. Depending in what order I'm importing those librairies, my code hangs forever and is unkillable (CTRL+C doesn't work, I need to kill my shell entirely). Thank you for your help 🙏 ### Steps to reproduce the bug If you run the following script, this will hang forever : ```python import tensorflow as tf import datasets dataset = datasets.load_dataset("imagenet-1k", split="validation", streaming=True) print(next(iter(dataset))) ``` however running the following will work fine (I just changed the order of the imports) : ```python import datasets import tensorflow as tf dataset = datasets.load_dataset("imagenet-1k", split="validation", streaming=True) print(next(iter(dataset))) ``` ### Expected behavior I'm expecting the script to reach the end and my case print the content of the first item in the dataset ``` {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=408x500 at 0x70C646A03110>, 'label': 91} ``` ### Environment info ``` $ datasets-cli env - `datasets` version: 3.3.2 - Platform: Linux-6.8.0-1017-aws-x86_64-with-glibc2.35 - Python version: 3.11.7 - `huggingface_hub` version: 0.29.1 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0 ``` I'm also using `tensorflow==2.18.0`.
DamienMatias
https://github.com/huggingface/datasets/issues/7419
null
false
2,868,701,471
7,418
pyarrow.lib.arrowinvalid: cannot mix list and non-list, non-null values with map function
open
[ "@lhoestq ", "Can you try passing text: None for the image object ? Pyarrow expects all the objects to have the exact same type, in particular the dicttionaries in \"content\" should all have the keys \"type\" and \"text\"", "The following modification on system prompt works, but it is different from the usual ...
2025-02-21T10:58:06
2025-07-11T13:06:10
null
### Describe the bug Encounter pyarrow.lib.arrowinvalid error with map function in some example when loading the dataset ### Steps to reproduce the bug ``` from datasets import load_dataset from PIL import Image, PngImagePlugin dataset = load_dataset("leonardPKU/GEOQA_R1V_Train_8K") system_prompt="You are a helpful AI Assistant" def make_conversation(example): prompt = [] prompt.append({"role": "system", "content": system_prompt}) prompt.append( { "role": "user", "content": [ {"type": "image"}, {"type": "text", "text": example["problem"]}, ] } ) return {"prompt": prompt} def check_data_types(example): for key, value in example.items(): if key == 'image': if not isinstance(value, PngImagePlugin.PngImageFile): print(value) if key == "problem" or key == "solution": if not isinstance(value, str): print(value) return example dataset = dataset.map(check_data_types) dataset = dataset.map(make_conversation) ``` ### Expected behavior Successfully process the dataset with map ### Environment info datasets==3.3.1
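For reference, a sketch of the workaround suggested in the comments — it assumes the fix is to give every message the same shape, i.e. a list of dicts that all carry both `type` and `text`, so pyarrow can infer a single struct type for `content`:

```python
system_prompt = "You are a helpful AI Assistant"

def make_conversation(example):
    # every "content" value is a list, and every dict in it has the same keys
    prompt = [
        {"role": "system", "content": [{"type": "text", "text": system_prompt}]},
        {
            "role": "user",
            "content": [
                {"type": "image", "text": None},
                {"type": "text", "text": example["problem"]},
            ],
        },
    ]
    return {"prompt": prompt}
```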
alexxchen
https://github.com/huggingface/datasets/issues/7418
null
false
2,866,868,922
7,417
set dev version
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7417). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-02-20T17:45:29
2025-02-20T17:47:50
2025-02-20T17:45:36
null
lhoestq
https://github.com/huggingface/datasets/pull/7417
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7417", "html_url": "https://github.com/huggingface/datasets/pull/7417", "diff_url": "https://github.com/huggingface/datasets/pull/7417.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7417.patch", "merged_at": "2025-02-20T17:45:36" }
true
2,866,862,143
7,416
Release: 3.3.2
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7416). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-02-20T17:42:11
2025-02-20T17:44:35
2025-02-20T17:43:28
null
lhoestq
https://github.com/huggingface/datasets/pull/7416
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7416", "html_url": "https://github.com/huggingface/datasets/pull/7416", "diff_url": "https://github.com/huggingface/datasets/pull/7416.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7416.patch", "merged_at": "2025-02-20T17:43:28" }
true
2,865,774,546
7,415
Shard Dataset at specific indices
open
[ "Hi ! if it's an option I'd suggest to have one sequence per row instead.\n\nOtherwise you'd have to make your own save/load mechanism", "Saving one sequence per row is very difficult and heavy and makes all the optimizations pointless. How would a custom save/load mechanism look like?", "You can use `pyarrow` ...
2025-02-20T10:43:10
2025-02-24T11:06:45
null
I have a dataset of sequences, where each example in the sequence is a separate row in the dataset (similar to LeRobotDataset). When running `Dataset.save_to_disk`, how can I provide indices at which to shard the dataset such that no episode spans more than 1 shard? Consequently, when I run `Dataset.load_from_disk`, how can I load just a subset of the shards to save memory and time on different ranks? I guess an alternative to this would be: given a loaded `Dataset`, how can I run `Dataset.shard` such that sharding doesn't split any episode across shards?
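To help frame the request, here is a minimal sketch of the manual workaround (assuming an `episode_index` column, as in LeRobotDataset): compute the episode start rows yourself, group whole episodes into shards with `Dataset.select`, and save each shard to its own directory so each rank can `load_from_disk` only its part.

```python
import numpy as np
from datasets import Dataset

# toy data: 3 episodes spread over 9 rows
ds = Dataset.from_dict({"episode_index": [0, 0, 0, 1, 1, 2, 2, 2, 2],
                        "frame_index":   [0, 1, 2, 0, 1, 0, 1, 2, 3]})

episodes = np.asarray(ds["episode_index"])
starts = np.flatnonzero(np.r_[True, episodes[1:] != episodes[:-1]])  # first row of each episode
episodes_per_shard = 2
shard_starts = [int(starts[j]) for j in range(0, len(starts), episodes_per_shard)] + [len(ds)]

for i in range(len(shard_starts) - 1):
    shard = ds.select(range(shard_starts[i], shard_starts[i + 1]))  # whole episodes only
    shard.save_to_disk(f"my_dataset/shard_{i:05d}")
```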
nikonikolov
https://github.com/huggingface/datasets/issues/7415
null
false
2,863,798,756
7,414
Gracefully cancel async tasks
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7414). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-02-19T16:10:58
2025-02-20T14:12:26
2025-02-20T14:12:23
null
lhoestq
https://github.com/huggingface/datasets/pull/7414
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7414", "html_url": "https://github.com/huggingface/datasets/pull/7414", "diff_url": "https://github.com/huggingface/datasets/pull/7414.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7414.patch", "merged_at": "2025-02-20T14:12:23" }
true
2,860,947,582
7,413
Documentation on multiple media files of the same type with WebDataset
open
[ "Yes this is correct and it works with huggingface datasets as well ! Feel free to include an example here: https://github.com/huggingface/datasets/blob/main/docs/source/video_dataset.mdx" ]
2025-02-18T16:13:20
2025-02-20T14:17:54
null
The [current documentation](https://huggingface.co/docs/datasets/en/video_dataset) on creating a video dataset includes only examples with one media file and one JSON. It would be useful to have examples where multiple files of the same type are included. For example, in a sign language dataset, you may have a base video and a video annotation of the extracted pose. According to the WebDataset documentation, this should be possible with period-separated filenames. For example: ``` e39871fd9fd74f55.base.mp4 e39871fd9fd74f55.pose.mp4 e39871fd9fd74f55.json f18b91585c4d3f3e.base.mp4 f18b91585c4d3f3e.pose.mp4 f18b91585c4d3f3e.json ... ``` If you can confirm that this method of including multiple media files works with huggingface datasets and include an example in the documentation, I'd appreciate it.
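In case it is useful for the documentation, a hedged sketch of how such a shard could be written with plain `tarfile` (the file paths and metadata are illustrative, and I have not verified exactly how the `webdataset` loader names the resulting columns):

```python
import io
import json
import tarfile

samples = [  # illustrative paths
    ("e39871fd9fd74f55", "videos/e39871fd9fd74f55.mp4", "poses/e39871fd9fd74f55.mp4", {"gloss": "hello"}),
]

with tarfile.open("train-00000.tar", "w") as tar:
    for key, base_path, pose_path, meta in samples:
        tar.add(base_path, arcname=f"{key}.base.mp4")   # period-separated extensions
        tar.add(pose_path, arcname=f"{key}.pose.mp4")
        payload = json.dumps(meta).encode()
        info = tarfile.TarInfo(f"{key}.json")
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))
```

The resulting shards would then presumably be loaded with something like `load_dataset("webdataset", data_files="train-*.tar")`.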
DCNemesis
https://github.com/huggingface/datasets/issues/7413
null
false
2,859,433,710
7,412
IndexError: Invalid key is out of bounds for size 0 for code-search-net/code_search_net dataset
open
[]
2025-02-18T05:58:33
2025-02-18T06:42:07
null
### Describe the bug I am trying to do model pruning on sentence-transformers/all-mini-L6-v2 for the code-search-net/code_search_net dataset using the INCTrainer class. However, I am getting the error below: ``` raise IndexError(f"Invalid key: {key} is out of bounds for size {size}") IndexError: Invalid key: 1840208 is out of bounds for size 0 ``` ### Steps to reproduce the bug Model pruning on the above dataset using the guide below: https://huggingface.co/docs/optimum/en/intel/neural_compressor/optimization#pruning ### Expected behavior The model should be successfully pruned ### Environment info Torch version: 2.4.1 Python version: 3.8.10
harshakhmk
https://github.com/huggingface/datasets/issues/7412
null
false
2,858,993,390
7,411
Attempt to fix multiprocessing hang by closing and joining the pool before termination
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7411). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Thanks for the fix! We have been affected by this a lot when we try to use LLM Foundry ...
2025-02-17T23:58:03
2025-02-19T21:11:24
2025-02-19T13:40:32
https://github.com/huggingface/datasets/issues/6393 has plagued me on and off for a very long time. I have had various workarounds (one time combining two filter calls into one filter call removed the issue, another time making rank 0 go first resolved a cache race condition, one time I think upgrading the version of something resolved it). I don't know hf datasets well enough to fully understand the root cause, but I _think_ this PR fixes it. Evidence: I have an LLM Foundry training yaml/script (datasets version 3.2.0) that results in a hang ~1/10 times (for a baseline for this testing, it was 2/36 runs that hung). I also reran with the latest datasets version (3.3.1) and got 4/36 hung. Installing datasets from this PR, I was able to successfully run the script 144 times without a hang occurring. Assuming the base probability is 1/10, this should be more than enough times to have confidence it works. After adding some logging, I could see that the code hung during the __exit__ of the mp pool context manager, after all shards had been processed, and the tqdm context manager had exited. My best explanation: When multiprocessing pool __exit__ is called, it calls pool.terminate, which forcefully exits all the processes (and calls code related to this that I haven't looked at closely). I'm guessing this forceful termination has a bad interaction with some multithreading/multiprocessing that hf datasets does. If we instead call pool.close and pool.join before the pool.terminate happens, perhaps whatever that bad interaction is can complete gracefully, and then the terminate call proceeds without issue. If this PR seems good to you, I'd be very appreciative if you were able to do a patch release including it. Thank you! @lhoestq
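For anyone skimming, the change boils down to draining the pool before the context manager tears it down — a minimal standalone sketch (standard-library `multiprocessing` here; `datasets` uses the `multiprocess` fork, which has the same API):

```python
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    with Pool(4) as pool:
        results = list(pool.imap(square, range(8)))
        pool.close()  # no more work will be submitted
        pool.join()   # wait for the workers to shut down cleanly
    # __exit__ still calls pool.terminate(), but there is nothing left to kill abruptly
    print(results)
```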
dakinggg
https://github.com/huggingface/datasets/pull/7411
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7411", "html_url": "https://github.com/huggingface/datasets/pull/7411", "diff_url": "https://github.com/huggingface/datasets/pull/7411.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7411.patch", "merged_at": "2025-02-19T13:40:32" }
true
2,858,085,707
7,410
Set dev version
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7410). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-02-17T14:54:39
2025-02-17T14:56:58
2025-02-17T14:54:56
null
lhoestq
https://github.com/huggingface/datasets/pull/7410
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7410", "html_url": "https://github.com/huggingface/datasets/pull/7410", "diff_url": "https://github.com/huggingface/datasets/pull/7410.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7410.patch", "merged_at": "2025-02-17T14:54:56" }
true
2,858,079,508
7,409
Release: 3.3.1
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7409). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-02-17T14:52:12
2025-02-17T14:54:32
2025-02-17T14:53:13
null
lhoestq
https://github.com/huggingface/datasets/pull/7409
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7409", "html_url": "https://github.com/huggingface/datasets/pull/7409", "diff_url": "https://github.com/huggingface/datasets/pull/7409.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7409.patch", "merged_at": "2025-02-17T14:53:13" }
true
2,858,012,313
7,408
Fix filter speed regression
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7408). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-02-17T14:25:32
2025-02-17T14:28:48
2025-02-17T14:28:46
close https://github.com/huggingface/datasets/issues/7404
lhoestq
https://github.com/huggingface/datasets/pull/7408
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7408", "html_url": "https://github.com/huggingface/datasets/pull/7408", "diff_url": "https://github.com/huggingface/datasets/pull/7408.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7408.patch", "merged_at": "2025-02-17T14:28:46" }
true
2,856,517,442
7,407
Update use_with_pandas.mdx: to_pandas() correction in last section
closed
[]
2025-02-17T01:53:31
2025-02-20T17:28:04
2025-02-20T17:28:04
last section: `to_pandas()`
ibarrien
https://github.com/huggingface/datasets/pull/7407
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7407", "html_url": "https://github.com/huggingface/datasets/pull/7407", "diff_url": "https://github.com/huggingface/datasets/pull/7407.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7407.patch", "merged_at": "2025-02-20T17:28:04" }
true
2,856,441,206
7,406
Adding Core Maintainer List to CONTRIBUTING.md
closed
[ "@lhoestq", "there is no per-module maintainer and the list is me alone nowadays ^^'", "@lhoestq \nOh... I feel for you. \nWhat are your criteria for choosing a core maintainer? \nIt seems like it's too much work for you to manage all this code by yourself.\n\nAlso, if you don't mind, can you check this PR for ...
2025-02-17T00:32:40
2025-03-24T10:57:54
2025-03-24T10:57:54
### Feature request I propose adding a core maintainer list to the `CONTRIBUTING.md` file. ### Motivation The Transformers and Liger-Kernel projects maintain lists of core maintainers for each module. However, the Datasets project doesn't have such a list. ### Your contribution I have nothing to add here.
jp1924
https://github.com/huggingface/datasets/issues/7406
null
false
2,856,372,814
7,405
Lazy loading of environment variables
open
[ "Many python packages out there, including `huggingface_hub`, do load the environment variables on import.\nYou should `load_dotenv()` before importing the libraries.\n\nFor example you can move all you imports inside your `main()` function" ]
2025-02-16T22:31:41
2025-02-17T15:17:18
null
### Describe the bug Loading a `.env` file after an `import datasets` call does not correctly use the environment variables. This is due to the fact that environment variables are read at import time: https://github.com/huggingface/datasets/blob/de062f0552a810c52077543c1169c38c1f0c53fc/src/datasets/config.py#L155C1-L155C80 ### Steps to reproduce the bug ```bash # make tmp dir mkdir -p /tmp/debug-env # make .env file echo HF_HOME=/tmp/debug-env/data > /tmp/debug-env/.env # first load dotenv, downloads to /tmp/debug-env/data uv run --with datasets,python-dotenv python3 -c \ 'import dotenv; dotenv.load_dotenv("/tmp/debug-env/.env"); import datasets; datasets.load_dataset("Anthropic/hh-rlhf")' # first import datasets, downloads to `~/.cache/huggingface` uv run --with datasets,python-dotenv python3 -c \ 'import datasets; import dotenv; dotenv.load_dotenv("/tmp/debug-env/.env"); datasets.load_dataset("Anthropic/hh-rlhf")' ``` ### Expected behavior I expect that setting environment variables with something like this: ```python3 if __name__ == "__main__": load_dotenv() main() ``` works correctly. ### Environment info "datasets>=3.3.0",
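As suggested in the comment above, a minimal sketch of the workaround (paths and dataset name taken from the reproduction) is to call `load_dotenv()` before the `datasets` import:

```python
import dotenv

dotenv.load_dotenv("/tmp/debug-env/.env")  # sets HF_HOME before datasets reads it

import datasets  # config.py reads HF_HOME at import time, so import afterwards


def main():
    datasets.load_dataset("Anthropic/hh-rlhf")


if __name__ == "__main__":
    main()
```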
nikvaessen
https://github.com/huggingface/datasets/issues/7405
null
false
2,856,366,207
7,404
Performance regression in `dataset.filter`
closed
[ "Thanks for reporting, I'll fix the regression today", "I just released `datasets` 3.3.1 with a fix, let me know if it's good now :)", "@lhoestq it fixed the issue.\n\nThis was (very) fast, thank you very much!" ]
2025-02-16T22:19:14
2025-02-17T17:46:06
2025-02-17T14:28:48
### Describe the bug We're filtering a dataset of ~1M (small-ish) records. At some point in the code we do `dataset.filter`; before (including 3.2.0) it was taking a couple of seconds, and now it takes 4 hours. We use 16 threads/workers, and the stack traces for them look as follows: ``` Traceback (most recent call last): File "/python/lib/python3.12/site-packages/multiprocess/process.py", line 314, in _bootstrap self.run() File "/python/lib/python3.12/site-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/python/lib/python3.12/site-packages/multiprocess/pool.py", line 125, in worker result = (True, func(*args, **kwds)) ^^^^^^^^^^^^^^^^^^^ File "/python/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 678, in _write_generator_to_queue for i, result in enumerate(func(**kwargs)): File "/python/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3511, in _map_single for i, batch in iter_outputs(shard_iterable): File "/python/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3461, in iter_outputs yield i, apply_function(example, i, offset=offset) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/python/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3390, in apply_function processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/python/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 6416, in get_indices_from_mask_function indices_array = indices_mapping.column(0).take(indices_array) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "pyarrow/table.pxi", line 1079, in pyarrow.lib.ChunkedArray.take File "/python/lib/python3.12/site-packages/pyarrow/compute.py", line 458, in take def take(data, indices, *, boundscheck=True, memory_pool=None): ``` ### Steps to reproduce the bug 1. Save a dataset of 1M records in Arrow format 2. Filter it with 16 threads 3. Watch it take far too long ### Expected behavior Filtering completes in seconds, as it did before. ### Environment info datasets 3.3.0, python 3.12
ttim
https://github.com/huggingface/datasets/issues/7404
null
false
2,855,880,858
7,402
Fix a typo in arrow_dataset.py
closed
[]
2025-02-16T04:52:02
2025-02-20T17:29:28
2025-02-20T17:29:28
"in the feature" should be "in the future"
jingedawang
https://github.com/huggingface/datasets/pull/7402
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7402", "html_url": "https://github.com/huggingface/datasets/pull/7402", "diff_url": "https://github.com/huggingface/datasets/pull/7402.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7402.patch", "merged_at": "2025-02-20T17:29:28" }
true
2,853,260,869
7,401
set dev version
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7401). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-02-14T10:17:03
2025-02-14T10:19:20
2025-02-14T10:17:13
null
lhoestq
https://github.com/huggingface/datasets/pull/7401
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7401", "html_url": "https://github.com/huggingface/datasets/pull/7401", "diff_url": "https://github.com/huggingface/datasets/pull/7401.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7401.patch", "merged_at": "2025-02-14T10:17:13" }
true
2,853,098,442
7,399
Synchronize parameters for various datasets
open
[ "Hi ! the `desc` parameter is only available for Dataset / DatasetDict for the progress bar of `map()``\n\nSince IterableDataset only runs the map functions when you iterate over the dataset, there is no progress bar and `desc` is useless. We could still add the argument for parity but it wouldn't be used for anyth...
2025-02-14T09:15:11
2025-02-19T11:50:29
null
### Describe the bug The [IterableDatasetDict](https://huggingface.co/docs/datasets/v3.2.0/en/package_reference/main_classes#datasets.IterableDatasetDict.map) map function is missing the `desc` parameter. You can see the equivalent map function for [Dataset here](https://huggingface.co/docs/datasets/v3.2.0/en/package_reference/main_classes#datasets.Dataset.map). There might be other parameters missing - I haven't checked. ### Steps to reproduce the bug ```python from datasets import Dataset, IterableDataset, IterableDatasetDict ds = IterableDatasetDict({"train": Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=3), "validate": Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=3)}) for d in ds["train"]: print(d) ds = ds.map(lambda x: {k: v+1 for k, v in x.items()}, desc="increment") for d in ds["train"]: print(d) ``` ### Expected behavior The `desc` parameter should be available for all datasets (or none). ### Environment info - `datasets` version: 3.2.0 - Platform: Linux-6.1.85+-x86_64-with-glibc2.35 - Python version: 3.11.11 - `huggingface_hub` version: 0.28.1 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.9.0
grofte
https://github.com/huggingface/datasets/issues/7399
null
false
2,853,097,869
7,398
Release: 3.3.0
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7398). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-02-14T09:15:03
2025-02-14T09:57:39
2025-02-14T09:57:37
null
lhoestq
https://github.com/huggingface/datasets/pull/7398
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7398", "html_url": "https://github.com/huggingface/datasets/pull/7398", "diff_url": "https://github.com/huggingface/datasets/pull/7398.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7398.patch", "merged_at": "2025-02-14T09:57:37" }
true
2,852,829,763
7,397
Kannada dataset (Conversations, Wikipedia, etc.)
closed
[ "Hi ! feel free to uplad the CSV on https://huggingface.co/datasets :)\r\n\r\nwe don't store the datasets' data in this github repository" ]
2025-02-14T06:53:03
2025-02-20T17:28:54
2025-02-20T17:28:53
null
Likhith2612
https://github.com/huggingface/datasets/pull/7397
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7397", "html_url": "https://github.com/huggingface/datasets/pull/7397", "diff_url": "https://github.com/huggingface/datasets/pull/7397.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7397.patch", "merged_at": null }
true
2,853,201,277
7,400
504 Gateway Timeout when uploading large dataset to Hugging Face Hub
open
[ "I transferred to the `datasets` repository. Is there any retry mechanism in `datasets` @lhoestq ?\n\nAnother solution @hotchpotch if you want to get your dataset pushed to the Hub in a robust way is to save it to a local folder first and then use `huggingface-cli upload-large-folder` (see https://huggingface.co/do...
2025-02-14T02:18:35
2025-02-14T23:48:36
null
### Description I encountered consistent 504 Gateway Timeout errors while attempting to upload a large dataset (approximately 500GB) to the Hugging Face Hub. The upload fails during the process with a Gateway Timeout error. I will continue trying to upload. While it might succeed in future attempts, I wanted to report this issue in the meantime. ### Reproduction - I attempted the upload 3 times - Each attempt resulted in the same 504 error during the upload process (not at the start, but in the middle of the upload) - Using `dataset.push_to_hub()` method ### Environment Information ``` - huggingface_hub version: 0.28.0 - Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.39 - Python version: 3.11.10 - Running in iPython ?: No - Running in notebook ?: No - Running in Google Colab ?: No - Running in Google Colab Enterprise ?: No - Token path ?: /home/hotchpotch/.cache/huggingface/token - Has saved token ?: True - Who am I ?: hotchpotch - Configured git credential helpers: store - FastAI: N/A - Tensorflow: N/A - Torch: 2.5.1 - Jinja2: 3.1.5 - Graphviz: N/A - keras: N/A - Pydot: N/A - Pillow: 10.4.0 - hf_transfer: N/A - gradio: N/A - tensorboard: N/A - numpy: 1.26.4 - pydantic: 2.10.6 - aiohttp: 3.11.11 - ENDPOINT: https://huggingface.co - HF_HUB_CACHE: /home/hotchpotch/.cache/huggingface/hub - HF_ASSETS_CACHE: /home/hotchpotch/.cache/huggingface/assets - HF_TOKEN_PATH: /home/hotchpotch/.cache/huggingface/token - HF_STORED_TOKENS_PATH: /home/hotchpotch/.cache/huggingface/stored_tokens - HF_HUB_OFFLINE: False - HF_HUB_DISABLE_TELEMETRY: False - HF_HUB_DISABLE_PROGRESS_BARS: None - HF_HUB_DISABLE_SYMLINKS_WARNING: False - HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False - HF_HUB_DISABLE_IMPLICIT_TOKEN: False - HF_HUB_ENABLE_HF_TRANSFER: False - HF_HUB_ETAG_TIMEOUT: 10 - HF_HUB_DOWNLOAD_TIMEOUT: 10 ``` ### Full Error Traceback ```python Traceback (most recent call last): File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 406, in hf_raise_for_status response.raise_for_status() File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/requests/models.py", line 1024, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/datasets/hotchpotch/fineweb-2-edu-japanese.git/info/lfs/objects/batch The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/create_edu_japanese_ds/upload_edu_japanese_ds.py", line 12, in <module> ds.push_to_hub("hotchpotch/fineweb-2-edu-japanese", private=True) File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/datasets/dataset_dict.py", line 1665, in push_to_hub split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 5301, in _push_parquet_shards_to_hub api.preupload_lfs_files( File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 4215, in preupload_lfs_files _upload_lfs_files( File 
"/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/_commit_api.py", line 395, in _upload_lfs_files batch_actions_chunk, batch_errors_chunk = post_lfs_batch_info( ^^^^^^^^^^^^^^^^^^^^ File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^ File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/lfs.py", line 168, in post_lfs_batch_info hf_raise_for_status(resp) File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 477, in hf_raise_for_status raise _format(HfHubHTTPError, str(e), response) from e huggingface_hub.errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/datasets/hotchpotch/fineweb-2-edu-japanese.git/info/lfs/objects/batch ```
hotchpotch
https://github.com/huggingface/datasets/issues/7400
null
false
2,851,716,755
7,396
Update README.md
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7396). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-02-13T17:44:36
2025-02-13T17:46:57
2025-02-13T17:44:51
null
lhoestq
https://github.com/huggingface/datasets/pull/7396
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7396", "html_url": "https://github.com/huggingface/datasets/pull/7396", "diff_url": "https://github.com/huggingface/datasets/pull/7396.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7396.patch", "merged_at": "2025-02-13T17:44:51" }
true
2,851,575,160
7,395
Update docs
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7395). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-02-13T16:43:15
2025-02-13T17:20:32
2025-02-13T17:20:30
- update min python version - replace canonical dataset names with new names - avoid examples with trust_remote_code
lhoestq
https://github.com/huggingface/datasets/pull/7395
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7395", "html_url": "https://github.com/huggingface/datasets/pull/7395", "diff_url": "https://github.com/huggingface/datasets/pull/7395.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7395.patch", "merged_at": "2025-02-13T17:20:29" }
true
2,847,172,115
7,394
Using load_dataset with data_files and split arguments yields an error
open
[]
2025-02-12T04:50:11
2025-02-12T04:50:11
null
### Describe the bug It seems the list of valid splits recorded by the package becomes incorrectly overwritten when using the `data_files` argument. If I run ```python from datasets import load_dataset load_dataset("allenai/super", split="all_examples", data_files="tasks/expert.jsonl") ``` then I get the error ``` ValueError: Unknown split "all_examples". Should be one of ['train']. ``` However, if I run ```python from datasets import load_dataset load_dataset("allenai/super", split="train", name="Expert") ``` then I get ``` ValueError: Unknown split "train". Should be one of ['all_examples']. ``` ### Steps to reproduce the bug Run ```python from datasets import load_dataset load_dataset("allenai/super", split="all_examples", data_files="tasks/expert.jsonl") ``` ### Expected behavior No error. ### Environment info Python = 3.12 datasets = 3.2.0
devon-research
https://github.com/huggingface/datasets/issues/7394
null
false
2,846,446,674
7,393
Optimized sequence encoding for scalars
closed
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7393). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2025-02-11T20:30:44
2025-02-13T17:11:33
2025-02-13T17:11:32
The change in https://github.com/huggingface/datasets/pull/3197 introduced redundant list-comprehensions when `obj` is a long sequence of scalars. This becomes a noticeable overhead when loading data from an `IterableDataset` in the function `_apply_feature_types_on_example` and can be eliminated by adding a check for scalars in `encode_nested_example` proposed here. In the following code example ``` import time from datasets.features import Sequence, Value from datasets.features.features import encode_nested_example schema = Sequence(Value("int32")) obj = list(range(100000)) start = time.perf_counter() result = encode_nested_example(schema, obj) stop = time.perf_counter() print(f"Time spent is {stop-start} sec") ``` `encode_nested_example` becomes 492x faster (from 0.0769 to 0.0002 sec), respectively 322x (from 0.00814 to 0.00003 sec) for a list of length 10000, on a GH200 system, making it unnoticeable when loading data with tokenization. Another change is made to avoid creating arrays from scalars and afterwards re-extracting them during casting to python (`obj == obj.__array__()[()]` in that case), which avoids a regression in the array write benchmarks.
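For illustration only, a hypothetical sketch of the scalar fast path (the function name and signature are invented; this is not the code merged in this PR) might look like this:

```python
# Hypothetical sketch: when a Sequence(Value(...)) column receives a flat list
# of plain Python scalars, skip the per-element list comprehension entirely.
def encode_sequence(encode_nested_example, sub_schema, obj):
    scalar_types = (int, float, str, bytes, bool, type(None))
    if isinstance(obj, list) and all(isinstance(x, scalar_types) for x in obj):
        return obj  # fast path: nothing to encode element by element
    return [encode_nested_example(sub_schema, x) for x in obj]  # generic path
```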
lukasgd
https://github.com/huggingface/datasets/pull/7393
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7393", "html_url": "https://github.com/huggingface/datasets/pull/7393", "diff_url": "https://github.com/huggingface/datasets/pull/7393.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7393.patch", "merged_at": "2025-02-13T17:11:32" }
true
2,846,095,043
7,392
push_to_hub payload too large error when using large ClassLabel feature
open
[ "See also <https://discuss.huggingface.co/t/datasetdict-push-to-hub-failing-with-payload-to-large/140083/8>\n" ]
2025-02-11T17:51:34
2025-02-11T18:01:31
null
### Describe the bug When using `datasets.DatasetDict.push_to_hub` an `HfHubHTTPError: 413 Client Error: Payload Too Large for url` is raised if the dataset contains a large `ClassLabel` feature. Even if the total size of the dataset is small. ### Steps to reproduce the bug ``` python import random import sys import datasets random.seed(42) def random_str(sz): return "".join(chr(random.randint(ord("a"), ord("z"))) for _ in range(sz)) data = datasets.DatasetDict( { str(i): datasets.Dataset.from_dict( { "label": [list(range(3)) for _ in range(10)], "abstract": [random_str(10_000) for _ in range(10)], }, ) for i in range(3) } ) features = data["1"].features.copy() features["label"] = datasets.Sequence( datasets.ClassLabel(names=[str(i) for i in range(50_000)]) ) data = data.map(lambda examples: {}, features=features) feat_size = sys.getsizeof(data["1"].features["label"].feature.names) print(f"Size of ClassLabel names: {feat_size}") # Size of ClassLabel names: 444376 data.push_to_hub("dconnell/pubtator3_test") ``` Note that this succeeds if `ClassLabel` has fewer names or if `ClassLabel` is replaced with `Value("int64")` ### Expected behavior Should push the dataset to hub. ### Environment info Copy-and-paste the text below in your GitHub issue. - `datasets` version: 3.2.0 - Platform: Linux-5.15.0-126-generic-x86_64-with-glibc2.35 - Python version: 3.12.8 - `huggingface_hub` version: 0.28.1 - PyArrow version: 19.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.9.0
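For illustration, a minimal sketch of the workaround mentioned above (casting the oversized `Sequence(ClassLabel(...))` column to plain integers before pushing; the cast step is an assumption, the repo id is taken from the snippet above) could look like this:

```python
import datasets

# Reuse `data` from the reproduction above, but drop the 50k-name ClassLabel
# from the features before pushing; keep the id -> name mapping elsewhere
# (e.g. a small JSON file) if it is needed downstream.
features = data["1"].features.copy()
features["label"] = datasets.Sequence(datasets.Value("int64"))
data = data.cast(features)  # labels stay the same integers, payload stays small
data.push_to_hub("dconnell/pubtator3_test")
```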
DavidRConnell
https://github.com/huggingface/datasets/issues/7392
null
false
2,845,184,764
7,391
AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType'
open
[]
2025-02-11T12:02:26
2025-02-11T12:02:26
null
I have tried several versions of pyarrow, but none of them work.
LinXin04
https://github.com/huggingface/datasets/issues/7391
null
false
2,843,813,365
7,390
Re-add py.typed
open
[ "A similar issue was fixed for the `transformers` package, too: https://github.com/huggingface/transformers/pull/37022" ]
2025-02-10T22:12:52
2025-08-10T00:51:17
null
### Feature request The motivation for removing py.typed no longer seems to apply. Would a solution like [this one](https://github.com/huggingface/huggingface_hub/pull/2752) work here? ### Motivation MyPy support is broken. As more type checkers come out, such as RedKnot, these may also be broken. It would be good to be PEP 561 compliant as long as it's not too onerous. ### Your contribution I can re-add py.typed, but I don't know how to make sure all of the `__all__` files are provided (although you may not need to with modern PyRight).
NeilGirdhar
https://github.com/huggingface/datasets/issues/7390
null
false