| Column | Type | Min | Max |
|---|---|---|---|
| `id` | int64 | 599M | 3.48B |
| `number` | int64 | 1 | 7.8k |
| `title` | string (length) | 1 | 290 |
| `state` | string (2 values) | | |
| `comments` | list (length) | 0 | 30 |
| `created_at` | timestamp[s] | 2020-04-14 10:18:02 | 2025-10-05 06:37:50 |
| `updated_at` | timestamp[s] | 2020-04-27 16:04:17 | 2025-10-05 10:32:43 |
| `closed_at` | timestamp[s] | 2020-04-14 12:01:40 | 2025-10-01 13:56:03 |
| `body` | string (length) | 0 | 228k |
| `user` | string (length) | 3 | 26 |
| `html_url` | string (length) | 46 | 51 |
| `pull_request` | dict | | |
| `is_pull_request` | bool (2 classes) | | |
1,123,192,866
3,677
Discovery cannot be streamed anymore
closed
[ "Seems like a regression from https://github.com/huggingface/datasets/pull/2843\r\n\r\nOr maybe it's an issue with the hosting. I don't think so, though, because https://www.dropbox.com/s/aox84z90nyyuikz/discovery.zip seems to work as expected\r\n\r\n", "Hi @severo, thanks for reporting.\r\n\r\nSome servers do no...
2022-02-03T15:02:03
2022-02-10T16:51:24
2022-02-10T16:51:24
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python from datasets import load_dataset iterable_dataset = load_dataset("discovery", name="discovery", split="train", streaming=True) list(iterable_dataset.take(1)) ``` ## Expected results The first ...
severo
https://github.com/huggingface/datasets/issues/3677
null
false
1,123,096,362
3,676
`None` replaced by `[]` after first batch in map
closed
[ "It looks like this is because of this behavior in pyarrow:\r\n```python\r\nimport pyarrow as pa\r\n\r\narr = pa.array([None, [0]])\r\nreconstructed_arr = pa.ListArray.from_arrays(arr.offsets, arr.values)\r\nprint(reconstructed_arr.to_pylist())\r\n# [[], [0]]\r\n```\r\n\r\nIt seems that `arr.offsets` can reconstruc...
2022-02-03T13:36:48
2022-10-28T13:13:20
2022-10-28T13:13:20
Sometimes `None` can be replaced by `[]` when running map: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # b # 0 [None, [0]] # 1 [[], [0]] # ...
lhoestq
https://github.com/huggingface/datasets/issues/3676
null
false
1,123,078,408
3,675
Add CodeContests dataset
closed
[ "@mariosasko Can I take this up?", "This dataset is now available here: https://huggingface.co/datasets/deepmind/code_contests." ]
2022-02-03T13:20:00
2022-07-20T11:07:05
2022-07-20T11:07:05
## Adding a Dataset - **Name:** CodeContests - **Description:** CodeContests is a competitive programming dataset for machine learning. - **Paper:** - **Data:** https://github.com/deepmind/code_contests - **Motivation:** This dataset was used when training [AlphaCode](https://deepmind.com/blog/article/Competitive-...
mariosasko
https://github.com/huggingface/datasets/issues/3675
null
false
1,123,027,874
3,674
Add FrugalScore metric
closed
[ "@lhoestq \r\n\r\nThe model used by default (`moussaKam/frugalscore_tiny_bert-base_bert-score`) is a tiny model.\r\n\r\nI still want to make one modification before merging.\r\nI would like to load the model checkpoint once. Do you think it's a good idea if I load it in `_download_and_prepare`? In this case should ...
2022-02-03T12:28:52
2022-02-21T15:58:44
2022-02-21T15:58:44
This pull request adds the FrugalScore metric for evaluating NLG systems. FrugalScore is a reference-based metric for NLG model evaluation. It is based on a distillation approach that learns a fixed, low-cost version of any expensive NLG metric while retaining most of its original performance. Paper: https:...
moussaKam
https://github.com/huggingface/datasets/pull/3674
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3674", "html_url": "https://github.com/huggingface/datasets/pull/3674", "diff_url": "https://github.com/huggingface/datasets/pull/3674.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3674.patch", "merged_at": "2022-02-21T15:58...
true
1,123,010,520
3,673
`load_dataset("snli")` is different from dataset viewer
closed
[ "Yes, we decided to replace the encoded label with the corresponding label when possible in the dataset viewer. But\r\n1. maybe it's the wrong default\r\n2. we could find a way to show both (with a switch, or showing both ie. `0 (neutral)`).\r\n", "Hi @severo,\r\n\r\nThanks for clarifying. \r\n\r\nI think this de...
2022-02-03T12:10:43
2022-02-16T11:22:31
2022-02-11T17:01:21
## Describe the bug The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded dataset shows the encoded labels (i.e., 0, 1, 2). Is t...
pietrolesci
https://github.com/huggingface/datasets/issues/3673
null
false
1,122,980,556
3,672
Prioritize `module.builder_kwargs` over defaults in `TestCommand`
closed
[]
2022-02-03T11:38:42
2022-02-04T12:37:20
2022-02-04T12:37:19
This fixes a bug in the `TestCommand` where multiple kwargs for `name` were passed if it was set in both the defaults and `module.builder_kwargs`. Example error: ```Python Traceback (most recent call last): File "create_metadata.py", line 96, in <module> main(**vars(args)) File "create_metadata.py", line 86, ...
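The failure mode described above is Python's generic duplicate-keyword error; a minimal sketch of the bug and the fix, with a hypothetical `build` function standing in for the builder (not the actual `TestCommand` code):

```python
def build(name="default", **kwargs):
    """Hypothetical stand-in for a dataset builder that accepts a `name` kwarg."""
    return name

defaults = {"name": "default_config"}
builder_kwargs = {"name": "module_config"}

# Unpacking both dicts passes `name` twice and raises a TypeError:
# "build() got multiple values for keyword argument 'name'"
try:
    build(**defaults, **builder_kwargs)
except TypeError as err:
    print(err)

# The idea of the fix: merge first, letting `builder_kwargs` take priority.
merged = {**defaults, **builder_kwargs}
print(build(**merged))  # module_config
```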
lvwerra
https://github.com/huggingface/datasets/pull/3672
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3672", "html_url": "https://github.com/huggingface/datasets/pull/3672", "diff_url": "https://github.com/huggingface/datasets/pull/3672.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3672.patch", "merged_at": "2022-02-04T12:37...
true
1,122,864,253
3,671
Give an estimate of the dataset size in DatasetInfo
open
[]
2022-02-03T09:47:10
2022-02-03T09:47:10
null
**Is your feature request related to a problem? Please describe.** Currently, only some of the datasets provide `dataset_size`, `download_size`, `size_in_bytes` (and `num_bytes` and `num_examples` inside `splits`). I would like to get this information, or an estimate, for all the datasets. **Describe the soluti...
severo
https://github.com/huggingface/datasets/issues/3671
null
false
1,122,439,827
3,670
feat: 🎸 generate info if dataset_infos.json does not exist
closed
[ "It's a first attempt at solving https://github.com/huggingface/datasets/issues/3013.", "I only kept these ones:\r\n```\r\n path: str,\r\n data_files: Optional[Union[Dict, List, str]] = None,\r\n download_config: Optional[DownloadConfig] = None,\r\n download_mode: Optional[GenerateMode] = None,\r\n ...
2022-02-02T22:11:56
2022-02-21T15:57:11
2022-02-21T15:57:10
in get_dataset_infos(). Also: add the `use_auth_token` parameter, and create get_dataset_config_info() ✅ Closes: #3013
severo
https://github.com/huggingface/datasets/pull/3670
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3670", "html_url": "https://github.com/huggingface/datasets/pull/3670", "diff_url": "https://github.com/huggingface/datasets/pull/3670.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3670.patch", "merged_at": "2022-02-21T15:57...
true
1,122,335,622
3,669
Common voice validated partition
closed
[ "Hi @patrickvonplaten - could you please advise whether this would be a welcomed change, and if so, who I consult regarding the unit-tests?", "I'd be happy with adding this change. @anton-l @lhoestq - what do you think?", "Cool ! I just fixed the tests by adding a dummy `validated.tsv` file in the dummy data ar...
2022-02-02T20:04:43
2022-02-08T17:26:52
2022-02-08T17:23:12
This patch adds access to the 'validated' partitions of CommonVoice datasets (provided by the dataset creators but not available in the HuggingFace interface yet). As 'validated' contains significantly more data than 'train' (although it contains both test and validation, so one needs to be careful there), it can be u...
shalymin-amzn
https://github.com/huggingface/datasets/pull/3669
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3669", "html_url": "https://github.com/huggingface/datasets/pull/3669", "diff_url": "https://github.com/huggingface/datasets/pull/3669.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3669.patch", "merged_at": "2022-02-08T17:23...
true
1,122,261,736
3,668
Couldn't cast array of type string error with cast_column
closed
[ "Hi ! I wasn't able to reproduce the error, are you still experiencing this ? I tried calling `cast_column` on a string column containing paths.\r\n\r\nIf you manage to share a reproducible code example that would be perfect", "Hi,\r\n\r\nI think my team mate got this solved. Clolsing it for now and will reopen i...
2022-02-02T18:33:29
2022-07-19T13:36:24
2022-07-19T13:36:24
## Describe the bug In OVH cloud, during the Hugging Face Robust Speech Recognition event, on an AI training notebook instance using JupyterLab and running a Jupyter notebook. When using the `dataset.cast_column("audio", Audio(sampling_rate=16_000))` method I get an error ![image](https://user-images.githubusercontent.com/25264...
R4ZZ3
https://github.com/huggingface/datasets/issues/3668
null
false
1,122,060,630
3,667
Process .opus files with torchaudio
closed
[ "Note that torchaudio is maybe less practical to use for TF or JAX users.\r\nThis is not in the scope of this PR, but in the future if we manage to find a way to let the user control the decoding it would be nice", "> Note that torchaudio is maybe less practical to use for TF or JAX users. This is not in the scop...
2022-02-02T15:23:14
2022-02-04T15:29:38
2022-02-04T15:29:38
@anton-l suggested processing .opus files with `torchaudio` instead of `soundfile`, as it's faster: ![opus](https://user-images.githubusercontent.com/16348744/152177816-2df6076c-f28b-4aef-a08d-b499b921414d.png) (moreover, I didn't manage to load .opus files with `soundfile` / `librosa` locally on any of my machines an...
polinaeterna
https://github.com/huggingface/datasets/pull/3667
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3667", "html_url": "https://github.com/huggingface/datasets/pull/3667", "diff_url": "https://github.com/huggingface/datasets/pull/3667.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3667.patch", "merged_at": null }
true
1,122,058,894
3,666
process .opus files (for Multilingual Spoken Words)
closed
[ "@lhoestq I still have problems with processing `.opus` files with `soundfile` so I actually cannot fully check that it works but it should... Maybe this should be investigated in case of someone else would also have problems with that.\r\n\r\nAlso, as the data is in a private repo on the hub (before we come to a ...
2022-02-02T15:21:48
2022-02-22T10:04:03
2022-02-22T10:03:53
Opus files require `libsndfile>=1.0.30`. Add a check for this version, and tests. **outdated:** Add the [Multilingual Spoken Words dataset](https://mlcommons.org/en/multilingual-spoken-words/) You can specify multiple languages for downloading 😌: ```python ds = load_dataset("datasets/ml_spoken_words", languages=...
polinaeterna
https://github.com/huggingface/datasets/pull/3666
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3666", "html_url": "https://github.com/huggingface/datasets/pull/3666", "diff_url": "https://github.com/huggingface/datasets/pull/3666.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3666.patch", "merged_at": "2022-02-22T10:03...
true
1,121,753,385
3,665
Fix MP3 resampling when a dataset's audio files have different sampling rates
closed
[]
2022-02-02T10:31:45
2022-02-02T10:52:26
2022-02-02T10:52:26
The resampler needs to be updated if the `orig_freq` doesn't match the audio file's sampling rate. Fix https://github.com/huggingface/datasets/issues/3662
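The idea of the fix can be sketched in isolation (hypothetical names; the real code lives in the `Audio` feature and wraps torchaudio's resampler): cache the resampler, but rebuild it whenever a file arrives with a different original sampling rate.

```python
class Resampler:
    """Hypothetical stand-in for torchaudio.transforms.Resample."""
    def __init__(self, orig_freq, new_freq=16_000):
        self.orig_freq = orig_freq
        self.new_freq = new_freq

_resampler = None

def get_resampler(orig_freq):
    global _resampler
    # Rebuild the cached resampler if the incoming file's sampling rate
    # differs from the one the cached instance was built for.
    if _resampler is None or _resampler.orig_freq != orig_freq:
        _resampler = Resampler(orig_freq)
    return _resampler

r32 = get_resampler(32_000)   # first file: 32 kHz
r16 = get_resampler(16_000)   # different rate: a new resampler is built
print(r32.orig_freq, r16.orig_freq)  # 32000 16000
print(get_resampler(16_000) is r16)  # True — reused while the rate matches
```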
lhoestq
https://github.com/huggingface/datasets/pull/3665
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3665", "html_url": "https://github.com/huggingface/datasets/pull/3665", "diff_url": "https://github.com/huggingface/datasets/pull/3665.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3665.patch", "merged_at": "2022-02-02T10:52...
true
1,121,233,301
3,664
[WIP] Return local paths to Common Voice
closed
[ "Cool thanks for giving it a try @anton-l ! \r\n\r\nWould be very much in favor of having \"real\" paths to the audio files again for non-streaming use cases. At the same time it would be nice to make the audio data loading script as understandable as possible so that the community can easily add audio datasets in ...
2022-02-01T21:48:27
2022-02-22T09:14:06
2022-02-22T09:14:06
Fixes https://github.com/huggingface/datasets/issues/3663 This is a proposed way of returning the old local file-based generator while keeping the new streaming generator intact. TODO: - [ ] brainstorm a bit more on https://github.com/huggingface/datasets/issues/3663 to see if we can do better - [ ] refactor th...
anton-l
https://github.com/huggingface/datasets/pull/3664
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3664", "html_url": "https://github.com/huggingface/datasets/pull/3664", "diff_url": "https://github.com/huggingface/datasets/pull/3664.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3664.patch", "merged_at": null }
true
1,121,067,647
3,663
[Audio] Path of Common Voice cannot be used for audio loading anymore
closed
[ "Having talked to @lhoestq, I see that this feature is no longer supported. \r\n\r\nI really don't think this was a good idea. It is a major breaking change and one for which we don't even have a working solution at the moment, which is bad for PyTorch as we don't want to force people to have `datasets` decode audi...
2022-02-01T18:40:10
2022-09-21T15:03:09
2022-09-21T14:56:22
## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(ds[0]["audio"]["path"]) load(ds[0]["path"]) ``` ## Expected results ...
patrickvonplaten
https://github.com/huggingface/datasets/issues/3663
null
false
1,121,024,403
3,662
[Audio] MP3 resampling is incorrect when dataset's audio files have different sampling rates
closed
[ "Thanks @lhoestq for finding the reason of incorrect resampling. This issue affects all languages which have sound files with different sampling rates such as Turkish and Luganda.", "@cahya-wirawan - do you know how many languages have different sampling rates in Common Voice? I'm quite surprised to see this for ...
2022-02-01T17:55:04
2022-02-02T10:52:25
2022-02-02T10:52:25
The Audio feature resampler for MP3 gets stuck with the first original frequency it meets, which causes subsequent decoding to be incorrect. Here is code to reproduce the issue: Let's first consider two audio files with different sampling rates, 32000 and 16000: ```python # first download a mp3 file with s...
lhoestq
https://github.com/huggingface/datasets/issues/3662
null
false
1,121,000,251
3,661
Remove unnecessary 'r' arg in
closed
[ "The CI failure is only because of the datasets is missing some sections in their cards - we can ignore that since it's unrelated to this PR" ]
2022-02-01T17:29:27
2022-02-07T16:57:27
2022-02-07T16:02:42
Originally from #3489
bryant1410
https://github.com/huggingface/datasets/pull/3661
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3661", "html_url": "https://github.com/huggingface/datasets/pull/3661", "diff_url": "https://github.com/huggingface/datasets/pull/3661.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3661.patch", "merged_at": "2022-02-07T16:02...
true
1,120,982,671
3,660
Change HTTP links to HTTPS
open
[]
2022-02-01T17:12:51
2022-09-21T15:16:32
null
I tested the links. I also fixed some typos. Originally from #3489
bryant1410
https://github.com/huggingface/datasets/pull/3660
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3660", "html_url": "https://github.com/huggingface/datasets/pull/3660", "diff_url": "https://github.com/huggingface/datasets/pull/3660.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3660.patch", "merged_at": null }
true
1,120,913,672
3,659
push_to_hub but preview not working
closed
[ "Hi @thomas-happify, please note that the preview may take some time before rendering the data.\r\n\r\nI've seen it is already working.\r\n\r\nI close this issue. Please feel free to reopen it if the problem arises again." ]
2022-02-01T16:23:57
2022-02-09T08:00:37
2022-02-09T08:00:37
## Dataset viewer issue for '*happifyhealth/twitter_pnn*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/happifyhealth/twitter_pnn)* I used ``` dataset.push_to_hub("happifyhealth/twitter_pnn") ``` but the preview is not working. Am I the one who added this dataset ? Yes
thomas-happify
https://github.com/huggingface/datasets/issues/3659
null
false
1,120,880,395
3,658
Dataset viewer issue for *P3*
closed
[ "The error is now:\r\n\r\n```\r\nStatus code: 400\r\nException: Status400Error\r\nMessage: this dataset is not supported for now.\r\n```\r\n\r\nWe've disabled the dataset viewer for several big datasets like this one. We hope being able to reenable it soon.", "The list of splits cannot be obtained. cc...
2022-02-01T15:57:56
2023-09-25T12:16:21
2023-09-25T12:16:21
## Dataset viewer issue for '*P3*' **Link: https://huggingface.co/datasets/bigscience/P3** ``` Status code: 400 Exception: SplitsNotFoundError Message: The split names could not be parsed from the dataset config. ``` Am I the one who added this dataset ? No
jeffistyping
https://github.com/huggingface/datasets/issues/3658
null
false
1,120,602,620
3,657
Extend dataset builder for streaming in `get_dataset_split_names`
closed
[ "I'm impatient to see if it has an impact on the number of valid datasets for the dataset viewer. For the record, today:\r\n\r\n<img width=\"660\" alt=\"Capture d’écran 2022-02-01 à 14 32 19\" src=\"https://user-images.githubusercontent.com/1676121/151977579-b5a239d9-6662-4aeb-bfd1-eef6b8249991.png\">\r\n", "Th...
2022-02-01T12:21:24
2022-02-03T22:49:06
2022-02-02T11:22:01
Currently, `get_dataset_split_names` doesn't extend a builder module to support streaming, even though it uses `StreamingDownloadManager` to download data. This PR fixes that. To test the change, run the following: ```bash pip install git+https://github.com/huggingface/datasets.git@fix-get_dataset_split_names-stre...
mariosasko
https://github.com/huggingface/datasets/pull/3657
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3657", "html_url": "https://github.com/huggingface/datasets/pull/3657", "diff_url": "https://github.com/huggingface/datasets/pull/3657.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3657.patch", "merged_at": "2022-02-02T11:22...
true
1,120,510,823
3,656
checksum error subjqa dataset
closed
[ "Hi @RensDimmendaal, \r\n\r\nI'm sorry but I can't reproduce your bug:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ...: ds = load_dataset(\"subjqa\", \"electronics\")\r\nDownloading builder script: 9.15kB [00:00, 4.10MB/s] ...
2022-02-01T10:53:33
2022-02-10T10:56:59
2022-02-10T10:56:38
## Describe the bug I get a checksum error when loading the `subjqa` dataset (used in the transformers book). ## Steps to reproduce the bug ```python from datasets import load_dataset subjqa = load_dataset("subjqa","electronics") ``` ## Expected results Loading the dataset ## Actual results ``` ---...
RensDimmendaal
https://github.com/huggingface/datasets/issues/3656
null
false
1,119,801,077
3,655
Pubmed dataset not reachable
closed
[ "Hi @abhi-mosaic, thanks for reporting.\r\n\r\nI'm looking at it... ", "also hitting this issue", "Hey @albertvillanova, sorry to reopen this... I can confirm that on `master` branch the dataset is downloadable now but it is still broken in streaming mode:\r\n\r\n```python\r\n >>> import datasets\r\n >>> pubmed...
2022-01-31T18:45:47
2022-12-19T19:18:10
2022-02-14T14:15:41
## Describe the bug Trying to use the `pubmed` dataset fails to reach / download the source files. ## Steps to reproduce the bug ```python pubmed_train = datasets.load_dataset('pubmed', split='train') ``` ## Expected results Should begin downloading the pubmed dataset. ## Actual results ``` ConnectionEr...
abhi-mosaic
https://github.com/huggingface/datasets/issues/3655
null
false
1,119,717,475
3,654
Better TQDM output
closed
[ "@lhoestq I've created a notebook for you to see the difference: https://colab.research.google.com/drive/1by3EqnoKvC2p-yKW4lPDGOFOZHyGVyeQ?usp=sharing.\r\n\r\nFeel free to suggest better descriptions for the progress bars. \r\n\r\nIf everything looks good, think we can merge." ]
2022-01-31T17:22:43
2022-02-03T15:55:34
2022-02-03T15:55:33
This PR does the following: * if `dataset_infos.json` exists for a dataset, uses `num_examples` to print the total number of examples that needs to be generated (in `builder.py`) * fixes `tqdm` + multiprocessing in Jupyter Notebook/Colab (the issue stems from this commit in the `tqdm` repo: https://github.com/tqdm/tq...
mariosasko
https://github.com/huggingface/datasets/pull/3654
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3654", "html_url": "https://github.com/huggingface/datasets/pull/3654", "diff_url": "https://github.com/huggingface/datasets/pull/3654.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3654.patch", "merged_at": "2022-02-03T15:55...
true
1,119,186,952
3,653
`to_json` in multiprocessing fashion sometimes deadlocks
open
[]
2022-01-31T09:35:07
2022-01-31T09:35:07
null
## Describe the bug `to_json` in multiprocessing fashion sometimes deadlocks instead of raising an exception. A temporary workaround is to notice that it deadlocks and then reduce the number of processes or the batch size in order to reduce the memory footprint. As @lhoestq pointed out, this might be related to https://bugs....
thomasw21
https://github.com/huggingface/datasets/issues/3653
null
false
1,118,808,738
3,652
sp. Columbia => Colombia
closed
[ "The original openslr site mixed both names https://openslr.org/72/ :-)", "Yeah, I filed the issue to have it fixed there last year, but it looks like they missed a few." ]
2022-01-31T00:41:03
2022-02-09T16:55:25
2022-01-31T08:29:07
"Columbia" is the name of various places in North America. The country is "Colombia".
serapio
https://github.com/huggingface/datasets/pull/3652
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3652", "html_url": "https://github.com/huggingface/datasets/pull/3652", "diff_url": "https://github.com/huggingface/datasets/pull/3652.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3652.patch", "merged_at": "2022-01-31T08:29...
true
1,118,597,647
3,651
Update link in wiki_bio dataset
closed
[ "> all the tests pass, but I'm still not able to import the dataset\r\n\r\nSince it's not merged on `master` yet, you have to provide the path to your local `wiki_bio.py` to use it.\r\nIndeed the library downloads the dataset files from `master` if you have a dev installation of the library.\r\n\r\nI agree it would...
2022-01-30T16:28:54
2022-01-31T14:50:48
2022-01-31T08:38:09
Fixes #3580 and makes the wiki_bio dataset work again. I changed the link and some documentation, and all the tests pass. Thanks @lhoestq for uploading the dataset to the HuggingFace data bucket. @lhoestq -- all the tests pass, but I'm still not able to import the dataset, as the old Google Drive link is cached some...
jxmorris12
https://github.com/huggingface/datasets/pull/3651
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3651", "html_url": "https://github.com/huggingface/datasets/pull/3651", "diff_url": "https://github.com/huggingface/datasets/pull/3651.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3651.patch", "merged_at": "2022-01-31T08:38...
true
1,118,537,429
3,650
Allow 'to_json' to run in unordered fashion in order to lower memory footprint
closed
[ "Hi @thomasw21, I remember suggesting `imap_unordered` to @lhoestq at that time to speed up `to_json` further but after trying `pool_imap` on multiple datasets (>9GB) , memory utilisation was almost constant and we decided to go ahead with that only. \r\n\r\n1. Did you try this without `gzip`? Because `gzip` featu...
2022-01-30T13:23:19
2023-09-25T06:28:51
2023-09-24T16:45:48
I'm using `to_json(..., num_proc=num_proc, compression='gzip')` with `num_proc>1`. I'm having an issue where things seem to deadlock at some point. Eventually I see OOM. I'm guessing it's an issue where one process starts to take a long time for a specific batch, and so the other processes keep accumulating their results in...
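The ordered-vs-unordered trade-off can be sketched with the pool API alone (using the thread-backed `multiprocessing.dummy.Pool` here so the snippet stays self-contained; the actual `to_json` change targets a process pool):

```python
import time
from multiprocessing.dummy import Pool  # thread-backed, same API as multiprocessing.Pool

def process_shard(shard_id):
    # Simulate one slow shard: with `imap`, results for later shards must
    # sit in memory until this one is consumed in submission order.
    if shard_id == 0:
        time.sleep(0.2)
    return shard_id

with Pool(4) as pool:
    ordered = list(pool.imap(process_shard, range(8)))
    unordered = list(pool.imap_unordered(process_shard, range(8)))

print(ordered)                       # [0, 1, 2, 3, 4, 5, 6, 7]
print(sorted(unordered) == ordered)  # True — same results, lower peak memory
```

With `imap_unordered`, each result can be written out as soon as it is ready, so a single slow batch no longer forces every later result to pile up in memory.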
thomasw21
https://github.com/huggingface/datasets/pull/3650
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3650", "html_url": "https://github.com/huggingface/datasets/pull/3650", "diff_url": "https://github.com/huggingface/datasets/pull/3650.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3650.patch", "merged_at": null }
true
1,117,502,250
3,649
Add IGLUE dataset
open
[]
2022-01-28T14:59:41
2022-01-28T15:02:35
null
## Adding a Dataset - **Name:** IGLUE - **Description:** IGLUE brings together 4 vision-and-language tasks across 20 languages (Twitter [thread](https://twitter.com/ebugliarello/status/1487045497583976455?s=20&t=SB4LZGDhhkUW83ugcX_m5w)) - **Paper:** https://arxiv.org/abs/2201.11732 - **Data:** https://github.com/e-...
lewtun
https://github.com/huggingface/datasets/issues/3649
null
false
1,117,465,505
3,648
Fix Windows CI: bump python to 3.7
closed
[]
2022-01-28T14:24:54
2022-01-28T14:40:39
2022-01-28T14:40:39
Python>=3.7 is needed to install `tokenizers` 0.11
lhoestq
https://github.com/huggingface/datasets/pull/3648
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3648", "html_url": "https://github.com/huggingface/datasets/pull/3648", "diff_url": "https://github.com/huggingface/datasets/pull/3648.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3648.patch", "merged_at": "2022-01-28T14:40...
true
1,117,383,675
3,647
Fix `add_column` on datasets with indices mapping
closed
[ "Sure, let's include this in today's release.", "Cool ! The windows CI should be fixed on master now, feel free to merge :)" ]
2022-01-28T13:06:29
2022-01-28T15:35:58
2022-01-28T15:35:58
My initial idea was to avoid the `flatten_indices` call and reorder a new column instead, but in the end I decided to follow `concatenate_datasets` and use `flatten_indices` to avoid padding when `dataset._indices.num_rows != dataset._data.num_rows`. Fix #3599
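The indices-mapping problem can be illustrated with plain lists (a toy model, not the real Arrow-backed internals):

```python
# Toy model: `data` is the full underlying table, `indices` the view created
# by something like dataset.select([3, 1]).
data = {"a": [10, 11, 12, 13]}
indices = [3, 1]

new_column = [100, 101]  # aligned with the *view*, not the underlying table

# Storing `new_column` directly next to `data` would misalign it: view row 0
# is data row 3, while new_column[0] would pair with data row 0.
# Flattening the indices first materializes the view, so the columns line up:
flat = {name: [values[i] for i in indices] for name, values in data.items()}
flat["b"] = new_column
print(flat)  # {'a': [13, 11], 'b': [100, 101]}
```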
mariosasko
https://github.com/huggingface/datasets/pull/3647
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3647", "html_url": "https://github.com/huggingface/datasets/pull/3647", "diff_url": "https://github.com/huggingface/datasets/pull/3647.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3647.patch", "merged_at": "2022-01-28T15:35...
true
1,116,544,627
3,646
Fix streaming datasets that are not reset correctly
closed
[ "Works smoothly with the `transformers.Trainer` class now, thank you!" ]
2022-01-27T17:21:02
2022-01-28T16:34:29
2022-01-28T16:34:28
Streaming datasets that use `StreamingDownloadManager.iter_archive` and `StreamingDownloadManager.iter_files` had some issues. Indeed, if you try to iterate over such a dataset twice, the second time it will be empty. This is because the two methods above are generator functions. I fixed this by making them return...
lhoestq
https://github.com/huggingface/datasets/pull/3646
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3646", "html_url": "https://github.com/huggingface/datasets/pull/3646", "diff_url": "https://github.com/huggingface/datasets/pull/3646.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3646.patch", "merged_at": "2022-01-28T16:34...
true
1,116,541,298
3,645
Streaming dataset based on dl_manager.iter_archive/iter_files are not reset correctly
closed
[]
2022-01-27T17:17:41
2022-01-28T16:34:28
2022-01-28T16:34:28
Hi ! After iterating over a streaming dataset once, it's not reset correctly, because of some issues with `dl_manager.iter_archive` and `dl_manager.iter_files`. Indeed, they are generator functions (so the iterator they return can be exhausted). They should be iterables instead, and be reset if we do a for loop again...
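The generator-function-vs-iterable distinction can be sketched in a few lines (hypothetical file names):

```python
def iter_archive():
    # A generator function: the iterator it returns is exhausted after one pass.
    yield from ["file1.txt", "file2.txt"]

it = iter_archive()
print(list(it))  # ['file1.txt', 'file2.txt']
print(list(it))  # [] — the same iterator yields nothing the second time

class ArchiveIterable:
    # An iterable: every `for` loop calls __iter__ and gets a fresh iterator.
    def __iter__(self):
        yield from ["file1.txt", "file2.txt"]

files = ArchiveIterable()
print(list(files))  # ['file1.txt', 'file2.txt']
print(list(files))  # ['file1.txt', 'file2.txt'] — resets on every pass
```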
lhoestq
https://github.com/huggingface/datasets/issues/3645
null
false
1,116,519,670
3,644
Add a GROUP BY operator
open
[ "Hi ! At the moment you can use `to_pandas()` to get a pandas DataFrame that supports `group_by` operations (make sure your dataset fits in memory though)\r\n\r\nWe use Arrow as a back-end for `datasets` and it doesn't have native group by (see https://github.com/apache/arrow/issues/2189) unfortunately.\r\n\r\nI ju...
2022-01-27T16:57:54
2025-01-28T11:39:48
null
**Is your feature request related to a problem? Please describe.** Using batch mapping, we can easily split examples. However, we lack an appropriate option for merging them back together by some key. Consider this example: ```python # features: # { # "example_id": datasets.Value("int32"), # "text": datas...
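Until a native operator exists, the merge-by-key step the request describes can be done in plain Python (a stdlib sketch; the comments also suggest `to_pandas()` plus pandas `groupby` when the dataset fits in memory):

```python
from itertools import groupby

examples = [
    {"example_id": 0, "text": "a"},
    {"example_id": 1, "text": "b"},
    {"example_id": 0, "text": "c"},
]

# itertools.groupby only merges *consecutive* keys, so sort by the key first.
examples.sort(key=lambda ex: ex["example_id"])
merged = [
    {"example_id": key, "text": [ex["text"] for ex in group]}
    for key, group in groupby(examples, key=lambda ex: ex["example_id"])
]
print(merged)
# [{'example_id': 0, 'text': ['a', 'c']}, {'example_id': 1, 'text': ['b']}]
```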
felix-schneider
https://github.com/huggingface/datasets/issues/3644
null
false
1,116,417,428
3,643
Fix sem_eval_2018_task_1 download location
closed
[ "I fixed those two things, the two remaining failing checks seem to be due to some dependency missing in the tests." ]
2022-01-27T15:45:00
2022-02-04T15:15:26
2022-02-04T15:15:26
As discussed with @lhoestq in https://github.com/huggingface/datasets/issues/3549#issuecomment-1020176931, this is the new pull request to fix the download location.
maxpel
https://github.com/huggingface/datasets/pull/3643
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3643", "html_url": "https://github.com/huggingface/datasets/pull/3643", "diff_url": "https://github.com/huggingface/datasets/pull/3643.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3643.patch", "merged_at": "2022-02-04T15:15...
true
1,116,306,986
3,642
Fix dataset slicing with negative bounds when indices mapping is not `None`
closed
[]
2022-01-27T14:45:53
2022-01-27T18:16:23
2022-01-27T18:16:22
Fix #3611
mariosasko
https://github.com/huggingface/datasets/pull/3642
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3642", "html_url": "https://github.com/huggingface/datasets/pull/3642", "diff_url": "https://github.com/huggingface/datasets/pull/3642.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3642.patch", "merged_at": "2022-01-27T18:16...
true
1,116,284,268
3,641
Fix numpy rngs when seed is None
closed
[]
2022-01-27T14:29:09
2022-01-27T18:16:08
2022-01-27T18:16:07
Fixes the NumPy RNG when `seed` is `None`. The problem becomes obvious after reading the NumPy notes on RNG (returned by `np.random.get_state()`): > The MT19937 state vector consists of a 624-element array of 32-bit unsigned integers plus a single integer value between 0 and 624 that indexes the current position wi...
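The underlying principle, shown here with the stdlib `random` module as a hedged sketch (not the actual NumPy fix): when no seed is given, derive a fresh one from OS entropy rather than reusing an existing generator state.

```python
import os
import random

def make_rng(seed=None):
    # With seed=None, draw a fresh 64-bit seed from OS entropy instead of
    # duplicating some existing generator state.
    if seed is None:
        seed = int.from_bytes(os.urandom(8), "little")
    return random.Random(seed)

# Two unseeded RNGs produce independent streams...
print(make_rng().random() != make_rng().random())  # almost surely True

# ...while an explicit seed remains reproducible:
print(make_rng(42).random() == make_rng(42).random())  # True
```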
mariosasko
https://github.com/huggingface/datasets/pull/3641
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3641", "html_url": "https://github.com/huggingface/datasets/pull/3641", "diff_url": "https://github.com/huggingface/datasets/pull/3641.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3641.patch", "merged_at": "2022-01-27T18:16...
true
1,116,133,769
3,640
Issues with custom dataset in Wav2Vec2
closed
[ "Closed and moved to transformers." ]
2022-01-27T12:09:05
2022-01-27T12:29:48
2022-01-27T12:29:48
We are training Wav2Vec2 using the run_speech_recognition_ctc_bnb.py script. This is working fine with Common Voice; however, using our custom dataset and data loader at [NbAiLab/NPSC]( https://huggingface.co/datasets/NbAiLab/NPSC) it crashes after roughly 1 epoch with the following stack trace: ![image](https://us...
peregilk
https://github.com/huggingface/datasets/issues/3640
null
false
1,116,021,420
3,639
Same value of precision, recall, and F1 score at each epoch for a classification task
closed
[ "Hi @Dhanachandra, \r\n\r\nWe have tests for all our metrics and they work as expected: under the hood, we use scikit-learn implementations.\r\n\r\nMaybe the cause is somewhere else. For example:\r\n- Is it a binary or a multiclass or a multilabel classification? Default computation of these metrics is for binary c...
2022-01-27T10:14:16
2022-02-24T09:02:18
2022-02-24T09:02:17
**1st Epoch:** 1/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/f1/default/default_experiment-1-0.arrow.59it/s] 01/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/precision/default/default_experiment-1-0.arrow 01/27/2022 09:3...
Dhanachandra
https://github.com/huggingface/datasets/issues/3639
null
false
1,115,725,703
3,638
AutoTokenizer hash value got change after datasets.map
open
[ "This issue was original reported at https://github.com/huggingface/transformers/issues/14931 and It seems like this issue also occur with other AutoClass like AutoFeatureExtractor.", "Thanks for moving the issue here !\r\n\r\nI wasn't able to reproduce the issue on my env (the hashes stay the same):\r\n```\r\n- ...
2022-01-27T03:19:03
2024-03-11T13:56:15
null
## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import load_dataset from datasets.fingerprint import Hasher tok...
tshu-w
https://github.com/huggingface/datasets/issues/3638
null
false
1,115,526,438
3,637
[TypeError: Couldn't cast array of type] Cannot load dataset in v1.18
closed
[ "Hi @lewtun!\r\n \r\nThis one was tricky to debug. Initially, I tought there is a bug in the recently-added (by @lhoestq ) `cast_array_to_feature` function because `git bisect` points to the https://github.com/huggingface/datasets/commit/6ca96c707502e0689f9b58d94f46d871fa5a3c9c commit. Then, I noticed that the feat...
2022-01-26T21:38:02
2022-02-09T16:15:53
2022-02-09T16:15:53
## Describe the bug I am trying to load the [`GEM/RiSAWOZ` dataset](https://huggingface.co/datasets/GEM/RiSAWOZ) in `datasets` v1.18.1 and am running into a type error when casting the features. The strange thing is that I can load the dataset with v1.17.0. Note that the error is also present if I install from `master...
lewtun
https://github.com/huggingface/datasets/issues/3637
null
false
1,115,362,702
3,636
Update index.rst
closed
[]
2022-01-26T18:43:09
2022-01-26T18:44:55
2022-01-26T18:44:54
null
VioletteLepercq
https://github.com/huggingface/datasets/pull/3636
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3636", "html_url": "https://github.com/huggingface/datasets/pull/3636", "diff_url": "https://github.com/huggingface/datasets/pull/3636.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3636.patch", "merged_at": "2022-01-26T18:44...
true
1,115,333,219
3,635
Make `ted_talks_iwslt` dataset streamable
closed
[ "Thanks for adding this @mariosasko! It worked for me when running it with a local data file, however, when using the file on Google Drive I get the following error:\r\n```Python\r\nds = load_dataset(\"./ted_talks_iwslt\",\"eu_ca_2014\", streaming=True, split=\"train\", use_auth_token=True)\r\nnext(iter(ds))\r\n```...
2022-01-26T18:07:56
2022-10-04T09:36:23
2022-10-03T09:44:47
null
mariosasko
https://github.com/huggingface/datasets/pull/3635
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3635", "html_url": "https://github.com/huggingface/datasets/pull/3635", "diff_url": "https://github.com/huggingface/datasets/pull/3635.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3635.patch", "merged_at": null }
true
1,115,133,279
3,634
Dataset.shuffle(seed=None) gives fixed row permutation
closed
[ "I'm not sure if this is expected behavior.\r\n\r\nAm I supposed to work with a copy of the dataset, i.e. `shuffled_dataset = data.shuffle(seed=None)`?\r\n\r\n```diff\r\nimport datasets\r\n\r\n# Some toy example\r\ndata = datasets.Dataset.from_dict(\r\n {\"feature\": [1, 2, 3, 4, 5], \"label\": [\"a\", \"b\", \"...
2022-01-26T15:13:08
2022-01-27T18:16:07
2022-01-27T18:16:07
## Describe the bug Repeated attempts to `shuffle` a dataset without specifying a seed give the same results. ## Steps to reproduce the bug ```python import datasets # Some toy example data = datasets.Dataset.from_dict( {"feature": [1, 2, 3, 4, 5], "label": ["a", "b", "c", "d", "e"]} ) # Doesn't work...
elisno
https://github.com/huggingface/datasets/issues/3634
null
false
1,115,040,174
3,633
Mirror canonical datasets in prod
closed
[]
2022-01-26T13:49:37
2022-01-26T13:56:21
2022-01-26T13:56:21
Push the datasets changes to the Hub in production by setting `HF_USE_PROD=1` I also added a fix that makes the script ignore the json, csv, text, parquet and pandas dataset builders. cc @SBrandeis
lhoestq
https://github.com/huggingface/datasets/pull/3633
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3633", "html_url": "https://github.com/huggingface/datasets/pull/3633", "diff_url": "https://github.com/huggingface/datasets/pull/3633.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3633.patch", "merged_at": "2022-01-26T13:56...
true
1,115,027,185
3,632
Adding CC-100: Monolingual Datasets from Web Crawl Data (Datasets links are invalid)
closed
[ "Hi @AnzorGozalishvili,\r\n\r\nMaybe their site was temporarily down, but it seems to work fine now.\r\n\r\nCould you please try again and confirm if the problem persists? ", "Hi @albertvillanova \r\nI checked and it works. \r\nIt seems that it was really temporarily down.\r\nThanks!" ]
2022-01-26T13:35:37
2022-02-10T06:58:11
2022-02-10T06:58:11
## Describe the bug The dataset links are no longer valid for CC-100. It seems that the website which was keeping these files are no longer accessible and therefore this dataset became unusable. Check out the dataset [homepage](http://data.statmt.org/cc-100/) which isn't accessible. Also the URLs for dataset file ...
AnzorGozalishvili
https://github.com/huggingface/datasets/issues/3632
null
false
1,114,833,662
3,631
Labels conflict when loading a local CSV file.
closed
[ "Hi @pichljan, thanks for reporting.\r\n\r\nThis should be fixed. I'm looking at it. " ]
2022-01-26T10:00:33
2022-02-11T23:02:31
2022-02-11T23:02:31
## Describe the bug I am trying to load a local CSV file with a separate file containing label names. It is successfully loaded for the first time, but when I try to load it again, there is a conflict between provided labels and the cached dataset info. Disabling caching globally and/or using `download_mode="force_red...
pichljan
https://github.com/huggingface/datasets/issues/3631
null
false
1,114,578,625
3,630
DuplicatedKeysError of NewsQA dataset
closed
[ "Thanks for reporting, @StevenTang1998.\r\n\r\nI'm fixing it. " ]
2022-01-26T03:05:49
2022-02-14T08:37:19
2022-02-14T08:37:19
After processing the dataset following official [NewsQA](https://github.com/Maluuba/newsqa), I used datasets to load it: ``` a = load_dataset('newsqa', data_dir='news') ``` and the following error occurred: ``` Using custom data configuration default-data_dir=news Downloading and preparing dataset newsqa/defaul...
StevenTang1998
https://github.com/huggingface/datasets/issues/3630
null
false
1,113,971,575
3,629
Fix Hub repos update when there's a new release
closed
[]
2022-01-25T14:39:45
2022-01-25T14:55:46
2022-01-25T14:55:46
It was not listing the full list of datasets correctly cc @SBrandeis this is why it failed for 1.18.0 We should be good now !
lhoestq
https://github.com/huggingface/datasets/pull/3629
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3629", "html_url": "https://github.com/huggingface/datasets/pull/3629", "diff_url": "https://github.com/huggingface/datasets/pull/3629.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3629.patch", "merged_at": "2022-01-25T14:55...
true
1,113,930,644
3,628
Dataset Card Creator drops information for "Additional Information" Section
open
[]
2022-01-25T14:06:17
2022-01-25T14:09:01
null
First of all, the card creator is a great addition and really helpful for streamlining dataset cards! ## Describe the bug I encountered an inconvenient bug when entering "Additional Information" in the react app, which drops already entered text when switching to a previous section, and then back again to "Addition...
dennlinger
https://github.com/huggingface/datasets/issues/3628
null
false
1,113,556,837
3,627
Fix host URL in The Pile datasets
closed
[ "We should also update the `bookcorpusopen` download url (see #3561) , no? ", "For `the_pile_openwebtext2` and `the_pile_stack_exchange` I did not regenerate the JSON files, but instead I just changed the download_checksums URL. ", "Seems like the mystic URL is now broken and the original should be used. ", "...
2022-01-25T08:11:28
2022-07-20T20:54:42
2022-02-14T08:40:58
This PR fixes the host URL in The Pile datasets, once they have mirrored their data in another server. Fix #3626.
albertvillanova
https://github.com/huggingface/datasets/pull/3627
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3627", "html_url": "https://github.com/huggingface/datasets/pull/3627", "diff_url": "https://github.com/huggingface/datasets/pull/3627.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3627.patch", "merged_at": "2022-02-14T08:40...
true
1,113,534,436
3,626
The Pile cannot connect to host
closed
[]
2022-01-25T07:43:33
2022-02-14T08:40:58
2022-02-14T08:40:58
## Describe the bug The Pile had issues with their previous host server and have mirrored its content to another server. The new URL server should be updated.
albertvillanova
https://github.com/huggingface/datasets/issues/3626
null
false
1,113,017,522
3,625
Add a metadata field for when source data was produced
open
[ "A question to the datasets maintainers: is there a policy about how the set of allowed metadata fields is maintained and expanded?\r\n\r\nMetadata are very important, but defining the standard is always a struggle between allowing exhaustivity without being too complex. Archivists have Dublin Core, open data has h...
2022-01-24T18:52:39
2022-06-28T13:54:49
null
**Is your feature request related to a problem? Please describe.** The current problem is that information about when source data was produced is not easily visible. Though there are a variety of metadata fields available in the dataset viewer, time period information is not included. This feature request suggests mak...
davanstrien
https://github.com/huggingface/datasets/issues/3625
null
false
1,112,835,239
3,623
Extend support for streaming datasets that use os.path.relpath
closed
[]
2022-01-24T16:00:52
2022-02-04T14:03:55
2022-02-04T14:03:54
This PR extends the support in streaming mode for datasets that use `os.path.relpath`, by patching that function. This feature will also be useful to yield the relative path of audio or image files, within an archive or parent dir. Close #3622.
albertvillanova
https://github.com/huggingface/datasets/pull/3623
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3623", "html_url": "https://github.com/huggingface/datasets/pull/3623", "diff_url": "https://github.com/huggingface/datasets/pull/3623.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3623.patch", "merged_at": "2022-02-04T14:03...
true
1,112,831,661
3,622
Extend support for streaming datasets that use os.path.relpath
closed
[]
2022-01-24T15:58:23
2022-02-04T14:03:54
2022-02-04T14:03:54
Extend support for streaming datasets that use `os.path.relpath`. This feature will also be useful to yield the relative path of audio or image files.
albertvillanova
https://github.com/huggingface/datasets/issues/3622
null
false
1,112,720,434
3,621
Consider adding `ipywidgets` as a dependency.
closed
[ "Hi! We use `tqdm` to display progress bars, so I suggest you open this issue in their repo.", "It depends on how you use `tqdm`, no? \r\n\r\nDoesn't this library import via; \r\n\r\n```\r\nfrom tqdm.notebook import tqdm\r\n```", "Hi! Sorry for the late reply. We import `tqdm` as `from tqdm.auto import tqdm`, w...
2022-01-24T14:27:11
2022-02-24T09:04:36
2022-02-24T09:04:36
When I install `datasets` in a fresh virtualenv with jupyterlab I always see this error. ``` ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html ``` It's a bit of a nuisance, because I need to run shut down the jupyterlab ser...
koaning
https://github.com/huggingface/datasets/issues/3621
null
false
1,112,677,252
3,620
Add Fon language tag
closed
[]
2022-01-24T13:52:26
2022-02-04T14:04:36
2022-02-04T14:04:35
Add Fon language tag to resources.
albertvillanova
https://github.com/huggingface/datasets/pull/3620
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3620", "html_url": "https://github.com/huggingface/datasets/pull/3620", "diff_url": "https://github.com/huggingface/datasets/pull/3620.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3620.patch", "merged_at": "2022-02-04T14:04...
true
1,112,611,415
3,619
fix meta in mls
closed
[ "Feel free to merge @polinaeterna as soon as you got an approval from either @lhoestq , @albertvillanova or @mariosasko" ]
2022-01-24T12:54:38
2022-01-24T20:53:22
2022-01-24T20:53:22
`monolingual` value of `multilinguality` param in yaml meta was changed to `multilingual` :)
polinaeterna
https://github.com/huggingface/datasets/pull/3619
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3619", "html_url": "https://github.com/huggingface/datasets/pull/3619", "diff_url": "https://github.com/huggingface/datasets/pull/3619.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3619.patch", "merged_at": "2022-01-24T20:53...
true
1,112,123,365
3,618
TIMIT Dataset not working with GPU
closed
[ "Hi ! I think you should avoid calling `timit_train['audio']`. Indeed by doing so you're **loading all the audio column in memory**. This is problematic in your case because the TIMIT dataset is huge.\r\n\r\nIf you want to access the audio data of some samples, you should do this instead `timit_train[:10][\"train\"...
2022-01-24T03:26:03
2023-07-25T15:20:20
2023-07-25T15:20:20
## Describe the bug I am working trying to use the TIMIT dataset in order to fine-tune Wav2Vec2 model and I am unable to load the "audio" column from the dataset when working with a GPU. I am working on Amazon Sagemaker Studio, on the Python 3 (PyTorch 1.8 Python 3.6 GPU Optimized) environment, with a single ml.g4...
TheSeamau5
https://github.com/huggingface/datasets/issues/3618
null
false
1,111,938,691
3,617
PR for the CFPB Consumer Complaints dataset
closed
[ "> Nice ! Thanks for adding this dataset :)\n> \n> \n> \n> I left a few comments:\n\nThanks!\n\nI'd be interested in contributing to the core codebase - I had to go down the custom loading approach because I couldn't pull this dataset in using the load_dataset() method. Using either the json or csv files available ...
2022-01-23T17:47:12
2022-02-07T21:08:31
2022-02-07T21:08:31
Think I followed all the steps but please let me know if anything needs changing or any improvements I can make to the code quality
kayvane1
https://github.com/huggingface/datasets/pull/3617
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3617", "html_url": "https://github.com/huggingface/datasets/pull/3617", "diff_url": "https://github.com/huggingface/datasets/pull/3617.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3617.patch", "merged_at": "2022-02-07T21:08...
true
1,111,587,861
3,616
Make streamable the BnL Historical Newspapers dataset
closed
[]
2022-01-22T14:52:36
2022-02-04T14:05:23
2022-02-04T14:05:21
I've refactored the code in order to make the dataset streamable and to avoid it taking too long: - I've used `iter_files` Close #3615
albertvillanova
https://github.com/huggingface/datasets/pull/3616
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3616", "html_url": "https://github.com/huggingface/datasets/pull/3616", "diff_url": "https://github.com/huggingface/datasets/pull/3616.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3616.patch", "merged_at": "2022-02-04T14:05...
true
1,111,576,876
3,615
Dataset BnL Historical Newspapers does not work in streaming mode
closed
[ "@albertvillanova let me know if there is anything I can do to help with this. I had a quick look at the code again and though I could try the following changes:\r\n- use `download` instead of `download_and_extract`\r\nhttps://github.com/huggingface/datasets/blob/d3d339fb86d378f4cb3c5d1de423315c07a466c6/datasets/bn...
2022-01-22T14:12:59
2022-02-04T14:05:21
2022-02-04T14:05:21
## Describe the bug When trying to load in streaming mode, it "hangs"... ## Steps to reproduce the bug ```python ds = load_dataset("bnl_newspapers", split="train", streaming=True) ``` ## Expected results The code should be optimized, so that it works fast in streaming mode. CC: @davanstrien
albertvillanova
https://github.com/huggingface/datasets/issues/3615
null
false
1,110,736,657
3,614
Minor fixes
closed
[]
2022-01-21T17:48:44
2022-01-24T12:45:49
2022-01-24T12:45:49
This PR: * adds "desc" to the `ignore_kwargs` list in `Dataset.filter` * fixes the default value of `id` in `DatasetDict.prepare_for_task`
mariosasko
https://github.com/huggingface/datasets/pull/3614
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3614", "html_url": "https://github.com/huggingface/datasets/pull/3614", "diff_url": "https://github.com/huggingface/datasets/pull/3614.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3614.patch", "merged_at": "2022-01-24T12:45...
true
1,110,684,015
3,613
Files not updating in dataset viewer
closed
[ "Yes. The jobs queue is full right now, following an upgrade... Back to normality in the next hours hopefully. I'll look at your datasets to be sure the dataset viewer works as expected on them.", "Should have been fixed now." ]
2022-01-21T16:47:20
2022-01-22T08:13:13
2022-01-22T08:13:13
## Dataset viewer issue for '*name of the dataset*' **Link:** Some examples: * https://huggingface.co/datasets/abidlabs/crowdsourced-speech4 * https://huggingface.co/datasets/abidlabs/test-audio-13 *short description of the issue* It seems that the dataset viewer is reading a cached version of the dataset and...
abidlabs
https://github.com/huggingface/datasets/issues/3613
null
false
1,110,506,466
3,612
wikifix
closed
[ "tests fail because of dataset_infos.json isn't updated. Unfortunately, I cannot get the datasets-cli locally to execute without error. Would need to troubleshoot, what's missing. Maybe someone else can pick up the stick. ", "Hi ! If we change the default date to the latest one, users won't be able to load the \"...
2022-01-21T14:05:11
2022-02-03T17:58:16
2022-02-03T17:58:16
This should get the wikipedia dataloading script back up and running - at least I hope so (tested with language ff and ii)
apergo-ai
https://github.com/huggingface/datasets/pull/3612
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3612", "html_url": "https://github.com/huggingface/datasets/pull/3612", "diff_url": "https://github.com/huggingface/datasets/pull/3612.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3612.patch", "merged_at": null }
true
1,110,399,096
3,611
Indexing bug after dataset.select()
closed
[ "Hi! Thanks for reporting! I've opened a PR with the fix." ]
2022-01-21T12:09:30
2022-01-27T18:16:22
2022-01-27T18:16:22
## Describe the bug A clear and concise description of what the bug is. Dataset indexing is not working as expected after `dataset.select(range(100))` ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import datasets task_to_keys = { "cola": ("sentence", None), "mnli":...
kamalkraj
https://github.com/huggingface/datasets/issues/3611
null
false
1,109,777,314
3,610
Checksum error when trying to load amazon_review dataset
closed
[ "It is solved now" ]
2022-01-20T21:20:32
2022-01-21T13:22:31
2022-01-21T13:22:31
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug I am getting the issue when trying to load dataset using ``` dataset = load_dataset("amazon_polarity") ``` ## Expected results dataset loaded ## Actual results ``` -------------------------------------...
ghost
https://github.com/huggingface/datasets/issues/3610
null
false
1,109,579,112
3,609
Fixes to pubmed dataset download function
closed
[ "Hi ! I think we can simply add a new configuration for the 2022 data instead of replacing them.\r\nYou can add the new configuration here:\r\n```python\r\n BUILDER_CONFIGS = [\r\n datasets.BuilderConfig(name=\"2021\", description=\"The 2021 annual record\", version=datasets.Version(\"1.0.0\")),\r\n ...
2022-01-20T17:31:35
2022-03-03T16:18:52
2022-03-03T14:23:35
Pubmed has updated its settings for 2022 and thus the existing download script does not work.
spacemanidol
https://github.com/huggingface/datasets/pull/3609
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3609", "html_url": "https://github.com/huggingface/datasets/pull/3609", "diff_url": "https://github.com/huggingface/datasets/pull/3609.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3609.patch", "merged_at": null }
true
1,109,310,981
3,608
Add support for continuous metrics (RMSE, MAE)
closed
[ "Hey @ck37 \r\n\r\nYou can always use a custom metric as explained [in this guide from HF](https://huggingface.co/docs/datasets/master/loading_metrics.html#using-a-custom-metric-script).\r\n\r\nIf this issue needs to be contributed to (for enhancing the metric API) I think [this link](https://scikit-learn.org/stabl...
2022-01-20T13:35:36
2022-03-09T17:18:20
2022-03-09T17:18:20
**Is your feature request related to a problem? Please describe.** I am uploading our dataset and models for the "Constructing interval measures" method we've developed, which uses item response theory to convert multiple discrete labels into a continuous spectrum for hate speech. Once we have this outcome our NLP m...
ck37
https://github.com/huggingface/datasets/issues/3608
null
false
1,109,218,370
3,607
Add MIT Scene Parsing Benchmark
closed
[]
2022-01-20T12:03:07
2022-02-18T12:51:01
2022-02-18T12:51:00
Add MIT Scene Parsing Benchmark (a subset of ADE20k). TODOs: * [x] add dummy data * [x] add dataset card * [x] generate `dataset_info.json`
mariosasko
https://github.com/huggingface/datasets/pull/3607
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3607", "html_url": "https://github.com/huggingface/datasets/pull/3607", "diff_url": "https://github.com/huggingface/datasets/pull/3607.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3607.patch", "merged_at": "2022-02-18T12:51...
true
1,108,918,701
3,606
audio column not saved correctly after resampling
closed
[ "Hi ! We just released a new version of `datasets` that should fix this.\r\n\r\nI tested resampling and using save/load_from_disk afterwards and it seems to be fixed now", "Hi @lhoestq, \r\n\r\nJust tested the latest datasets version, and confirming that this is fixed for me. \r\n\r\nThanks!", "Also, just an FY...
2022-01-20T06:37:10
2022-01-23T01:41:01
2022-01-23T01:24:14
## Describe the bug After resampling the audio column, saving with save_to_disk doesn't seem to save with the correct type. ## Steps to reproduce the bug - load a subset of common voice dataset (48Khz) - resample audio column to 16Khz - save with save_to_disk() - load with load_from_disk() ## Expected resul...
laphang
https://github.com/huggingface/datasets/issues/3606
null
false
1,108,738,561
3,605
Adding Turkic X-WMT evaluation set for machine translation
closed
[ "hi! Thank you for all the comments! I believe I addressed them all. Let me know if there is anything else", "Hi there! I was wondering if there is anything else to change before this can be merged", "@lhoestq Hi! Just a gentle reminder about the steps to merge this one! ", "Thanks for the heads up ! I think ...
2022-01-20T01:40:29
2022-01-31T09:50:57
2022-01-31T09:50:57
This dataset is a human-translated evaluation set for MT crowdsourced and provided by the [Turkic Interlingua ](turkic-interlingua.org) community. It contains eval sets for 8 Turkic languages covering 88 language directions. Languages being covered are: Azerbaijani (az) Bashkir (ba) English (en) Karakalpak (kaa) ...
mirzakhalov
https://github.com/huggingface/datasets/pull/3605
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3605", "html_url": "https://github.com/huggingface/datasets/pull/3605", "diff_url": "https://github.com/huggingface/datasets/pull/3605.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3605.patch", "merged_at": "2022-01-31T09:50...
true
1,108,477,316
3,604
Dataset Viewer not showing Previews for Private Datasets
closed
[ "Sure, it's on the roadmap.", "Closing in favor of https://github.com/huggingface/datasets-server/issues/39." ]
2022-01-19T19:29:26
2022-09-26T08:04:43
2022-09-26T08:04:43
## Dataset viewer issue for 'abidlabs/test-audio-13' It seems that the dataset viewer does not show previews for `private` datasets, even for the user who's private dataset it is. See [1] for example. If I change the visibility to public, then it does show, but it would be useful to have the viewer even for private ...
abidlabs
https://github.com/huggingface/datasets/issues/3604
null
false
1,108,392,141
3,603
Add British Library books dataset
closed
[ "Thanks for all the help and suggestions\r\n\r\n> Since the dataset has a very specific structure it might not be that easy so feel free to ping me if you have questions or if I can help !\r\n\r\nI did get a little stuck here! So far I have created directories for each config i.e:\r\n\r\n`datasets/datasets/blbooks/...
2022-01-19T17:53:05
2022-01-31T17:22:51
2022-01-31T17:01:49
This pull request adds a dataset of text from digitised (primarily 19th Century) books from the British Library. This collection has previously been used for training language models, e.g. https://github.com/dbmdz/clef-hipe/blob/main/hlms.md. It would be nice to make this dataset more accessible for others to use throu...
davanstrien
https://github.com/huggingface/datasets/pull/3603
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3603", "html_url": "https://github.com/huggingface/datasets/pull/3603", "diff_url": "https://github.com/huggingface/datasets/pull/3603.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3603.patch", "merged_at": "2022-01-31T17:01...
true
1,108,247,870
3,602
Update url for conll2003
closed
[ "Hi. lhoestq \r\n\r\n![image](https://user-images.githubusercontent.com/21982975/150345097-154f2b1a-bb12-47af-bddf-40eec0a0dadb.png)\r\nWhat is the solution for it?\r\nyou can see it is still doesn't work here.\r\nhttps://colab.research.google.com/drive/1l52FGWuSaOaGYchit4CbmtUSuzNDx_Ok?usp=sharing\r\nThank you.\r\...
2022-01-19T15:35:04
2022-01-20T16:23:03
2022-01-19T15:43:53
Following https://github.com/huggingface/datasets/issues/3582 I'm changing the download URL of the conll2003 data files, since the previous host doesn't have the authorization to redistribute the data
lhoestq
https://github.com/huggingface/datasets/pull/3602
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3602", "html_url": "https://github.com/huggingface/datasets/pull/3602", "diff_url": "https://github.com/huggingface/datasets/pull/3602.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3602.patch", "merged_at": "2022-01-19T15:43...
true
1,108,207,131
3,601
Add conll2003 licensing
closed
[]
2022-01-19T15:00:41
2022-01-19T17:17:28
2022-01-19T17:17:28
Following https://github.com/huggingface/datasets/issues/3582, this PR updates the licensing section of the CoNLL2003 dataset.
lhoestq
https://github.com/huggingface/datasets/pull/3601
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3601", "html_url": "https://github.com/huggingface/datasets/pull/3601", "diff_url": "https://github.com/huggingface/datasets/pull/3601.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3601.patch", "merged_at": "2022-01-19T17:17...
true
1,108,131,878
3,600
Use old url for conll2003
closed
[]
2022-01-19T13:56:49
2022-01-19T14:16:28
2022-01-19T14:16:28
As reported in https://github.com/huggingface/datasets/issues/3582 the CoNLL2003 data files are not available in the master branch of the repo that used to host them. For now we can use the URL from an older commit to access the data files
lhoestq
https://github.com/huggingface/datasets/pull/3600
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3600", "html_url": "https://github.com/huggingface/datasets/pull/3600", "diff_url": "https://github.com/huggingface/datasets/pull/3600.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3600.patch", "merged_at": "2022-01-19T14:16...
true
1,108,111,607
3,599
The `add_column()` method does not work if used on dataset sliced with `select()`
closed
[ "similar #3611 " ]
2022-01-19T13:36:50
2022-01-28T15:35:57
2022-01-28T15:35:57
Hello, I posted this as a question on the forums ([here](https://discuss.huggingface.co/t/add-column-does-not-work-if-used-on-dataset-sliced-with-select/13893)): I have a dataset with 2000 entries > dataset = Dataset.from_dict({'colA': list(range(2000))}) and from which I want to extract the first one thousan...
ThGouzias
https://github.com/huggingface/datasets/issues/3599
null
false
1,108,107,199
3,598
Readme info not being parsed to show on Dataset card page
closed
[ "i suspect a markdown parsing error, @severo do you want to take a quick look at it when you have some time?", "# Problem\r\nThe issue seems to coming from the front matter of the README\r\n```---\r\nannotations_creators:\r\n- no-annotation\r\nlanguage_creators:\r\n- machine-generated\r\nlanguages:\r\n- 'ca'\r\n-...
2022-01-19T13:32:29
2022-01-21T10:20:01
2022-01-21T10:20:01
## Describe the bug The info contained in the README.md file is not being shown on the dataset main page. Basic info and table of contents are properly formatted in the README. ## Steps to reproduce the bug # Sample code to reproduce the bug The README file is this one: https://huggingface.co/datasets/softcatal...
davidcanovas
https://github.com/huggingface/datasets/issues/3598
null
false
1,108,092,864
3,597
ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /content
closed
[ "Hi! The `cd` command in Jupyer/Colab needs to start with `%`, so this should work:\r\n```\r\n!git clone https://github.com/huggingface/datasets.git\r\n%cd datasets\r\n!pip install -e \".[streaming]\"\r\n```", "thanks @mariosasko i had the same mistake and your solution is what was needed" ]
2022-01-19T13:19:28
2022-08-05T12:35:51
2022-02-14T08:46:34
## Bug The install of streaming dataset is giving following error. ## Steps to reproduce the bug ```python ! git clone https://github.com/huggingface/datasets.git ! cd datasets ! pip install -e ".[streaming]" ``` ## Actual results Cloning into 'datasets'... remote: Enumerating objects: 50816, done. remot...
amitkml
https://github.com/huggingface/datasets/issues/3597
null
false
1,107,345,338
3,596
Loss of cast `Image` feature on certain dataset method
closed
[ "Hi! Thanks for reporting! The issue with `cast_column` should be fixed by #3575 and after we merge that PR I'll start working on the `push_to_hub` support for the `Image`/`Audio` feature.", "> Hi! Thanks for reporting! The issue with `cast_column` should be fixed by #3575 and after we merge that PR I'll start wo...
2022-01-18T20:44:01
2022-01-21T18:07:28
2022-01-21T18:07:28
## Describe the bug When an a column is cast to an `Image` feature, the cast type appears to be lost during certain operations. I first noticed this when using the `push_to_hub` method on a dataset that contained urls pointing to images which had been cast to an `image`. This also happens when using select on a data...
davanstrien
https://github.com/huggingface/datasets/issues/3596
null
false
1,107,260,527
3,595
Add ImageNet toy datasets from fastai
closed
[ "Thanks for your contribution, @mariosasko. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us i...
2022-01-18T19:03:35
2023-09-24T09:39:07
2022-09-30T14:39:35
Adds the ImageNet toy datasets from FastAI: Imagenette, Imagewoof and Imagewang. TODOs: * [ ] add dummy data * [ ] add dataset card * [ ] generate `dataset_info.json`
mariosasko
https://github.com/huggingface/datasets/pull/3595
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3595", "html_url": "https://github.com/huggingface/datasets/pull/3595", "diff_url": "https://github.com/huggingface/datasets/pull/3595.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3595.patch", "merged_at": null }
true
1,107,174,619
3,594
fix multiple language downloading in mC4
closed
[ "The CI failure is unrelated to your PR and fixed on master, merging :)" ]
2022-01-18T17:25:19
2022-01-19T11:22:57
2022-01-18T19:10:22
If we try to access multiple languages of the [mC4 dataset](https://github.com/huggingface/datasets/tree/master/datasets/mc4), it throws an error. For example, if we do ```python mc4_subset_two_langs = load_dataset("mc4", languages=["st", "su"]) ``` we get ``` FileNotFoundError: Couldn't find file at https:/...
polinaeterna
https://github.com/huggingface/datasets/pull/3594
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3594", "html_url": "https://github.com/huggingface/datasets/pull/3594", "diff_url": "https://github.com/huggingface/datasets/pull/3594.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3594.patch", "merged_at": "2022-01-18T19:10...
true
1,107,070,852
3,593
Update README.md
closed
[]
2022-01-18T15:52:16
2022-01-20T17:14:53
2022-01-20T17:14:53
Progress towards adding license information for the Tweet Eval parts
borgr
https://github.com/huggingface/datasets/pull/3593
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3593", "html_url": "https://github.com/huggingface/datasets/pull/3593", "diff_url": "https://github.com/huggingface/datasets/pull/3593.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3593.patch", "merged_at": "2022-01-20T17:14...
true
1,107,026,723
3,592
Add QuickDraw dataset
closed
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-01-18T15:13:39
2022-06-09T10:04:54
2022-06-09T09:56:13
Add the QuickDraw dataset. TODOs: * [x] add dummy data * [x] add dataset card * [x] generate `dataset_info.json`
mariosasko
https://github.com/huggingface/datasets/pull/3592
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3592", "html_url": "https://github.com/huggingface/datasets/pull/3592", "diff_url": "https://github.com/huggingface/datasets/pull/3592.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3592.patch", "merged_at": "2022-06-09T09:56...
true
1,106,928,613
3,591
Add support for time, date, duration, and decimal dtypes
closed
[ "Is there a dataset which uses these four datatypes for tests purposes?\r\n", "@severo Not yet. I'll let you know if that changes." ]
2022-01-18T13:46:05
2022-01-31T18:29:34
2022-01-20T17:37:33
Add support for the pyarrow time (maps to `datetime.time` in Python), date (maps to `datetime.date` in Python), duration (maps to `datetime.timedelta` in Python), and decimal (maps to `decimal.Decimal` in Python) dtypes. This should be helpful when writing scripts for time-series datasets.
mariosasko
https://github.com/huggingface/datasets/pull/3591
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3591", "html_url": "https://github.com/huggingface/datasets/pull/3591", "diff_url": "https://github.com/huggingface/datasets/pull/3591.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3591.patch", "merged_at": "2022-01-20T17:37...
true
1,106,784,860
3,590
Update ANLI README.md
closed
[]
2022-01-18T11:22:53
2022-01-20T16:58:41
2022-01-20T16:58:41
Update license and little things concerning ANLI
borgr
https://github.com/huggingface/datasets/pull/3590
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3590", "html_url": "https://github.com/huggingface/datasets/pull/3590", "diff_url": "https://github.com/huggingface/datasets/pull/3590.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3590.patch", "merged_at": "2022-01-20T16:58...
true
1,106,766,114
3,589
Pin torchmetrics to fix the COMET test
closed
[]
2022-01-18T11:03:49
2022-01-18T11:04:56
2022-01-18T11:04:55
Torchmetrics 0.7.0 got released and has issues with `transformers` (see https://github.com/PyTorchLightning/metrics/issues/770) I'm pinning it to 0.6.0 in the CI, since 0.7.0 makes the COMET metric test fail. COMET requires torchmetrics==0.6.0 anyway.
lhoestq
https://github.com/huggingface/datasets/pull/3589
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3589", "html_url": "https://github.com/huggingface/datasets/pull/3589", "diff_url": "https://github.com/huggingface/datasets/pull/3589.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3589.patch", "merged_at": "2022-01-18T11:04...
true
1,106,749,000
3,588
Update HellaSwag README.md
closed
[]
2022-01-18T10:46:15
2022-01-20T16:57:43
2022-01-20T16:57:43
Adding information from the git repo and paper that were missing
borgr
https://github.com/huggingface/datasets/pull/3588
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3588", "html_url": "https://github.com/huggingface/datasets/pull/3588", "diff_url": "https://github.com/huggingface/datasets/pull/3588.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3588.patch", "merged_at": "2022-01-20T16:57...
true
1,106,719,182
3,587
No module named 'fsspec.archive'
closed
[]
2022-01-18T10:17:01
2022-08-11T09:57:54
2022-01-18T10:33:10
## Describe the bug Cannot import datasets after installation. ## Steps to reproduce the bug ```shell $ python Python 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> import datasets Traceback (most recent...
shuuchen
https://github.com/huggingface/datasets/issues/3587
null
false
1,106,455,672
3,586
Revisit `enable/disable_` toggle function prefix
closed
[]
2022-01-18T04:09:55
2022-03-14T15:01:08
2022-03-14T15:01:08
As discussed in https://github.com/huggingface/transformers/pull/15167, we should revisit the `enable/disable_` toggle function prefix, potentially in favor of `set_enabled_`. Concretely, this translates to - De-deprecating `disable_progress_bar()` - Adding `enable_progress_bar()` - On the caching side, adding `en...
jaketae
https://github.com/huggingface/datasets/issues/3586
null
false
1,105,821,470
3,585
Datasets streaming + map doesn't work for `Audio`
closed
[ "This seems related to https://github.com/huggingface/datasets/issues/3505." ]
2022-01-17T12:55:42
2022-01-20T13:28:00
2022-01-20T13:28:00
## Describe the bug When using audio datasets in streaming mode, applying a `map(...)` before iterating leads to an error as the key `array` does not exist anymore. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("common_voice", "en", streaming=True, split="train")...
patrickvonplaten
https://github.com/huggingface/datasets/issues/3585
null
false
1,105,231,768
3,584
https://huggingface.co/datasets/huggingface/transformers-metadata
closed
[]
2022-01-17T00:18:14
2022-02-14T08:51:27
2022-02-14T08:51:27
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
ecankirkic
https://github.com/huggingface/datasets/issues/3584
null
false
1,105,195,144
3,583
Add The Medical Segmentation Decathlon Dataset
open
[ "Hello! I have recently been involved with a medical image segmentation project myself and was going through the `The Medical Segmentation Decathlon Dataset` as well. \r\nI haven't yet had experience adding datasets to this repository yet but would love to get started. Should I take this issue?\r\nIf yes, I've got ...
2022-01-16T21:42:25
2022-03-18T10:44:42
null
## Adding a Dataset - **Name:** *The Medical Segmentation Decathlon Dataset* - **Description:** The underlying data set was designed to explore the axis of difficulties typically encountered when dealing with medical images, such as small data sets, unbalanced labels, multi-site data, and small objects. - **Paper:*...
omarespejel
https://github.com/huggingface/datasets/issues/3583
null
false
1,104,877,303
3,582
conll 2003 dataset source url is no longer valid
closed
[ "I came to open the same issue.", "Thanks for reporting !\r\n\r\nI pushed a temporary fix on `master` that uses an URL from a previous commit to access the dataset for now, until we have a better solution", "I changed the URL again to use another host, the fix is available on `master` and we'll probably do a ne...
2022-01-15T23:04:17
2022-07-20T13:06:40
2022-01-21T16:57:32
## Describe the bug Loading `conll2003` dataset fails because it was removed (just yesterday 1/14/2022) from the location it is looking for. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("conll2003") ``` ## Expected results The dataset should load. ## Actual r...
rcanand
https://github.com/huggingface/datasets/issues/3582
null
false
1,104,857,822
3,581
Unable to create a dataset from a parquet file in S3
open
[ "Hi ! Currently it only works with local paths, file-like objects are not supported yet" ]
2022-01-15T21:34:16
2022-02-14T08:52:57
null
## Describe the bug Trying to create a dataset from a parquet file in S3. ## Steps to reproduce the bug ```python import s3fs from datasets import Dataset s3 = s3fs.S3FileSystem(anon=False) with s3.open(PATH_LTR_TOY_CLEAN_DATASET, 'rb') as s3file: dataset = Dataset.from_parquet(s3file) ``` ## Expe...
regCode
https://github.com/huggingface/datasets/issues/3581
null
false
1,104,663,242
3,580
Bug in wiki bio load
closed
[ "+1, here's the error I got: \r\n\r\n```\r\n>>> from datasets import load_dataset\r\n>>>\r\n>>> load_dataset(\"wiki_bio\")\r\nDownloading: 7.58kB [00:00, 4.42MB/s]\r\nDownloading: 2.71kB [00:00, 1.30MB/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset wiki_bio/default (download: 318...
2022-01-15T10:04:33
2022-01-31T08:38:09
2022-01-31T08:38:09
wiki_bio is failing to load because of a failing drive link. Can someone fix this? ![7E90023B-A3B1-4930-BA25-45CCCB4E1710](https://user-images.githubusercontent.com/3104771/149617870-5a32a2da-2c78-483b-bff6-d7534215a423.png) ![653C1C76-C725-4A04-A0D8-084373BA612F](https://user-images.githubusercontent.com...
tuhinjubcse
https://github.com/huggingface/datasets/issues/3580
null
false
1,103,451,118
3,579
Add Text2log Dataset
closed
[ "The CI fails are unrelated to your PR and fixed on master, I think we can merge now !" ]
2022-01-14T10:45:01
2022-01-20T17:09:44
2022-01-20T17:09:44
Adding the text2log dataset used for training FOL sentence translation models
apergo-ai
https://github.com/huggingface/datasets/pull/3579
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3579", "html_url": "https://github.com/huggingface/datasets/pull/3579", "diff_url": "https://github.com/huggingface/datasets/pull/3579.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3579.patch", "merged_at": "2022-01-20T17:09...
true
1,103,403,287
3,578
label information get lost after parquet serialization
closed
[ "Hi ! We did a release of `datasets` today that may fix this issue. Can you try updating `datasets` and trying again ?\r\n\r\nEDIT: the issue is still there actually\r\n\r\nI think we can fix that by storing the Features in the parquet schema metadata, and then reload them when loading the parquet file", "This in...
2022-01-14T10:10:38
2023-07-25T15:44:53
2023-07-25T15:44:53
## Describe the bug In the *dataset_info.json* file, information about the labels gets lost after dataset serialization. ## Steps to reproduce the bug ```python from datasets import load_dataset # normal save dataset = load_dataset('glue', 'sst2', split='train') dataset.save_to_disk("normal_save") # save ...
Tudyx
https://github.com/huggingface/datasets/issues/3578
null
false
1,102,598,241
3,577
Add The Mexican Emotional Speech Database (MESD)
open
[]
2022-01-13T23:49:36
2022-01-27T14:14:38
null
## Adding a Dataset - **Name:** *The Mexican Emotional Speech Database (MESD)* - **Description:** *Contains 864 voice recordings with six different prosodies: anger, disgust, fear, happiness, neutral, and sadness. Furthermore, three voice categories are included: female adult, male adult, and child. * - **Paper:** *...
omarespejel
https://github.com/huggingface/datasets/issues/3577
null
false