Column schema of the flattened issue table below (value ranges are over the full split, not just the rows shown):

  id               int64          599M – 3.48B
  number           int64          1 – 7.8k
  title            string         length 1 – 290
  state            string         2 classes (open / closed)
  comments         list           length 0 – 30
  created_at       timestamp[s]   2020-04-14 10:18:02 – 2025-10-05 06:37:50
  updated_at       timestamp[s]   2020-04-27 16:04:17 – 2025-10-05 10:32:43
  closed_at        timestamp[s]   2020-04-14 12:01:40 – 2025-10-01 13:56:03 (nullable)
  body             string         length 0 – 228k (nullable)
  user             string         length 3 – 26
  html_url         string         length 46 – 51
  pull_request     dict           (null for plain issues)
  is_pull_request  bool           2 classes
#2969 [issue, closed] medical-dialog error
  id 1007217867 · smeyerhot · created 2021-09-25T23:08:44 · updated 2024-01-08T09:55:12 · closed 2021-10-11T07:46:42
  https://github.com/huggingface/datasets/issues/2969
  Body: When I attempt to download the Hugging Face dataset medical_dialog, it errors out midway through. Steps to reproduce: `raw_datasets = load_dataset("medical_dialog", "en", split="train", download_mode="force_redownload", data_d...`
  Comment: "Hi @smeyerhot, thanks for reporting. You are right: there is an issue with the dataset metadata. I'm fixing it. In the meantime, you can circumvent the issue by passing `ignore_verifications=True`: `raw_datasets = load_dataset("medical_dialog", "en", split="train", download_mode=...`"

#2968 [issue, closed] `DatasetDict` cannot be exported to parquet if the splits have different features
  id 1007209488 · LysandreJik · created 2021-09-25T22:18:39 · closed 2021-10-07T22:47:26
  https://github.com/huggingface/datasets/issues/2968
  Body: I'm trying to use parquet as a means of serialization for both `Dataset` and `DatasetDict` objects. Using `to_parquet` alongside `from_parquet` or `load_dataset` for a `Dataset` works perfectly. For `DatasetDict`, I use `to_parquet` on each split to save the parquet files in individual folder...
  Comment: "This is because you have to specify which split corresponds to what file: `data_files = {"train": "train/split.parquet", "validation": "validation/split.parquet"}; brand_new_dataset_2 = load_dataset("ds", data_files=data_files)`. Otherwise it tries to concatenate the two spli..."

#2967 [issue, closed] Adding vision-and-language datasets (e.g., VQA, VCR) to Datasets
  id 1007194837 · WadeYin9712 · created 2021-09-25T20:58:15 · closed 2021-10-03T20:34:22
  https://github.com/huggingface/datasets/issues/2967
  Body: Feature request: would you like to add any vision-and-language datasets (e.g., VQA, VCR) to Hugging Face Datasets? Additional context: This is Da Yin at UCLA. Recentl...
  No comments.

#2966 [PR, merged] Upload greek-legal-code dataset
  id 1007142233 · christospi · created 2021-09-25T16:52:15 · merged 2021-10-13T13:37:30
  https://github.com/huggingface/datasets/pull/2966
  Comment: "@albertvillanova @lhoestq thank you very much for reviewing! :hugs: I've pushed some updates/changes as requested."

#2965 [issue, closed] Invalid download URL of WMT17 `zh-en` data
  id 1007084153 · Ririkoo · created 2021-09-25T13:17:32 · closed 2022-08-31T06:47:10
  https://github.com/huggingface/datasets/issues/2965
  Body: Partial data (wmt17 zh-en) cannot be downloaded due to an invalid URL. Steps to reproduce: `from datasets import load_dataset; dataset = load_dataset('wmt17', 'zh-en')` fails with ConnectionError: Couldn't reach ftp://cwmt-wmt:[email protected]/pa...
  Comment: "Fixed in the current release. Close this issue."

#2964 [issue, closed] Error when calculating Matthews Correlation Coefficient loaded with `load_metric`
  id 1006605904 · alvarobartt · created 2021-09-24T15:55:21 · updated 2024-02-16T10:14:35 · closed 2021-09-25T08:06:07
  https://github.com/huggingface/datasets/issues/2964
  Body: After loading the metric "Matthews Correlation Coefficient" (https://huggingface.co/metrics/matthews_correlation) from 🤗 Datasets, the `.compute` method fails with `AttributeError: 'float' object has no attribute 'item'` (complete stack trace can be provided if re...
  Comment: "After some more tests I've realized that this issue is due to the `numpy.float64` to `float` conversion, but when defining a function named `compute_metrics` as follows: `def compute_metrics(eval_preds): metric = load_metric("matthews_correlation"); logits, labels = eval_pr...`"
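The failure in #2964 comes down to `numpy.float64` exposing `.item()` while a plain Python `float` does not. A minimal stdlib-only sketch of a conversion that tolerates both; `FakeNumpyScalar` is a stand-in so the example runs without numpy, and none of this is the metric's actual code:

```python
def to_plain_float(value):
    """Return a built-in float, whether `value` is a plain Python float or a
    numpy-style scalar exposing .item(); plain floats have no .item()."""
    if hasattr(value, "item"):  # numpy.float64 and friends
        return float(value.item())
    return float(value)

class FakeNumpyScalar:
    """Stand-in for numpy.float64 so this sketch runs without numpy."""
    def __init__(self, v):
        self._v = v
    def item(self):
        return self._v

print(to_plain_float(0.5))                   # -> 0.5
print(to_plain_float(FakeNumpyScalar(0.5)))  # -> 0.5
```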
#2963 [issue, closed] raise TypeError( TypeError: Provided `function` which is applied to all elements of table returns a variable of type <class 'list'>. Make sure provided `function` returns a variable of type `dict` to update the dataset or `None` if you are only interested in side effects.
  id 1006588605 · keloemma · created 2021-09-24T15:35:11 · closed 2021-09-24T15:38:24
  https://github.com/huggingface/datasets/issues/2963
  Body: I am trying to use Dataset to load my file in order to use a BERT embeddings model, but after loading with Dataset, when I pass it to the tokenizer using `map`, I get the following error: raise TypeError( TypeError: Provi...
  No comments.

#2962 [issue, open] Enable splits during streaming the dataset
  id 1006557666 · merveenoyan · created 2021-09-24T15:01:29 · updated 2025-07-17T04:53:20
  https://github.com/huggingface/datasets/issues/2962
  Body: I'd like to stream only a specific percentage or part of the dataset, i.e. do splitting while streaming as well. Proposed solution: enable splits when `streaming=True`, e.g. `dataset = load_dataset('dataset', split='train[:100]', streaming=True)`. Alternativ...
  Comment: "For the range splits by percentage over streaming datasets, I used a simple approach here (https://github.com/huggingface/transformers/pull/39286/files#diff-fb91934a0658db99f30d8c52d92f41d6ab83210134e7f21af8baac6ee65f548fR228) which can be reused to provide things like `[:25%]`, built-in and internally within `da..."
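While `split='train[:100]'` with `streaming=True` was not supported at the time of #2962, the effect can be approximated client-side by truncating the example stream. A stdlib-only sketch; the example dicts are invented, and this is not the `datasets` implementation:

```python
from itertools import islice

def take_first(stream, n):
    """Yield only the first n examples of an (unbounded) example stream,
    emulating a 'train[:n]' split without materializing the rest."""
    return islice(stream, n)

# A fake streaming dataset: a generator of example dicts.
examples = ({"idx": i, "text": f"example {i}"} for i in range(10_000))
head = [ex["idx"] for ex in take_first(examples, 3)]
print(head)  # -> [0, 1, 2]
```

Percentage slices like `[:25%]` are harder over a true stream, since the total length is not known up front.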
#2961 [PR, merged] Fix CI doc build
  id 1006453781 · albertvillanova · created 2021-09-24T13:13:28 · merged 2021-09-24T13:18:07
  https://github.com/huggingface/datasets/pull/2961
  Body: Pin `fsspec`. Before the issue: fsspec-2021.8.1, s3fs-2021.8.1. Generating the issue: fsspec-2021.9.0, s3fs-0.5.1.
  No comments.

#2960 [PR, merged] Support pandas 1.3 new `read_csv` parameters
  id 1006222850 · SBrandeis · created 2021-09-24T08:37:24 · merged 2021-09-24T11:22:30
  https://github.com/huggingface/datasets/pull/2960
  Body: Support two new arguments introduced in pandas v1.3.0: `encoding_errors` and `on_bad_lines`. `read_csv` reference: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
  No comments.

#2959 [PR, closed, not merged] Added computer vision tasks
  id 1005547632 · merveenoyan · created 2021-09-23T15:07:27 · closed 2022-03-01T17:41:51
  https://github.com/huggingface/datasets/pull/2959
  Body: Added various image processing/computer vision tasks.
  Comment: "Looks great, thanks! If the 3d ones are really rare we can remove them for now. And I can see that `object-detection` and `semantic-segmentation` are both task categories (top-level) and task ids (bottom-level). Maybe there's a way to group them and have less granularity for the task categories. For exampl..."

#2958 [PR, merged] Add security policy to the project
  id 1005144601 · albertvillanova · created 2021-09-23T08:20:55 · merged 2021-10-21T15:16:43
  https://github.com/huggingface/datasets/pull/2958
  Body: Add a security policy to the project, as recommended by GitHub: https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository. Close #2953.
  No comments.

#2957 [issue, closed] MultiWOZ Dataset NonMatchingChecksumError
  id 1004868337 · bradyneal · created 2021-09-22T23:45:00 · closed 2022-03-15T16:07:02
  https://github.com/huggingface/datasets/issues/2957
  Body: The checksums for the downloaded MultiWOZ dataset and the source MultiWOZ dataset aren't matching. Both of the dataset versions below yield the checksum error: `from datasets import load_dataset; dataset = load_dataset('multi_woz_v22', 'v2.2'); dataset = loa...`
  Comment: "Hi Brady! I met a similar issue: it got stuck in the downloading stage without downloading anything; maybe it is broken. After I changed the download from the URLs to the single URL of the Multiwoz project (https://github.com/budzianowski/multiwoz/archive/44f0f8479f11721831c5591b839ad78827da197b.zip) and used dirs to get sep..."

#2956 [issue, open] Cache problem in the `load_dataset` method for local compressed file(s)
  id 1004306367 · SaulLu · created 2021-09-22T13:34:32 · updated 2023-08-31T16:49:01
  https://github.com/huggingface/datasets/issues/2956
  Body: Cache problem in the `load_dataset` method: when modifying a compressed file in a local folder, `load_dataset` doesn't detect the change and loads the previous version. To test it directly, I have prepared a Google Colaboratory notebook (https://colab.research.g...
  Comment: "The problem is still present. One solution would be to add the `download_mode="force_redownload"` argument to `load_dataset`. However, doing so may lead to a `DatasetGenerationError: An error occurred while generating the dataset`. To mitigate, just do: `rm -r ~/.cache/huggingface/datasets/*`"
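The stale-cache problem in #2956 is fundamentally about keying the cache on the file's path rather than its contents. A stdlib sketch of content-based invalidation; this is illustrative only, not how `datasets` computes its cache keys:

```python
import hashlib
import os
import tempfile

def file_fingerprint(path, chunk_size=1 << 20):
    """Hash a file's bytes so a cache keyed on this value notices when a
    local archive changed, even if its name and path stayed the same."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Modifying the file changes the fingerprint, so a cache keyed on it would
# rebuild instead of silently serving the stale, previously extracted data.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "data.gz")
    with open(path, "wb") as f:
        f.write(b"version 1")
    before = file_fingerprint(path)
    with open(path, "wb") as f:
        f.write(b"version 2")
    after = file_fingerprint(path)
    print(before != after)  # -> True
```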
#2955 [PR, merged] Update legacy Python image for CI tests in Linux
  id 1003999469 · albertvillanova · created 2021-09-22T08:25:27 · merged 2021-09-24T10:36:05
  https://github.com/huggingface/datasets/pull/2955
  Body: Instead of legacy images, use next-generation convenience images, built from the ground up with CI, efficiency, and determinism in mind. Some highlights: faster spin-up time; in Docker terminology, these next-gen images will generally have fewer and smaller layers. Using these new images will lead to fas...
  Comment: "There is an exception when running `pip install .[tests]`: Processing /home/circleci/datasets, Collecting numpy>=1.17 (from datasets==1.12.2.dev0), Downloading https://files.pythonhosted.org/packages/45/b2/6c7545bb7a38754d63048c7696804a0d947328125d81bf12beaa692c3ae3/numpy-1.19.5-cp36-cp36m-manylinu..."

#2954 [PR, merged] Run tests in parallel
  id 1003904803 · albertvillanova · created 2021-09-22T07:00:44 · merged 2021-09-28T06:55:51
  https://github.com/huggingface/datasets/pull/2954
  Body: Run CI tests in parallel to speed up the test suite. Speed-up results: Linux from 7m 30s to 5m 32s; Windows from 13m 52s to 11m 10s.
  Comments: "There is a speed up in Windows machines: from 13m 52s to 11m 10s. In Linux machines, some workers crash with error message: OSError: [Errno 12] Cannot allocate memory" · "There is also a speed up in Linux machines: from 7m 30s to 5m 32s"

#2953 [issue, closed] Trying to get in touch regarding a security issue
  id 1002766517 · JamieSlome · created 2021-09-21T15:58:13 · closed 2021-10-21T15:16:43
  https://github.com/huggingface/datasets/issues/2953
  Body: Hey there! I'd like to report a security issue but cannot find contact instructions on your repository. If not a hassle, might you kindly add a `SECURITY.md` file with an email, or another contact method? GitHub recommends (https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-rep...
  Comment: "Hi @JamieSlome, thanks for reaching out. Yes, you are right: I'm opening a PR to add the `SECURITY.md` file and a contact method. In the meantime, please feel free to report the security issue to: [email protected]"

#2952 [PR, merged] Fix missing conda deps
  id 1002704096 · lhoestq · created 2021-09-21T15:23:01 · merged 2021-09-21T15:30:44
  https://github.com/huggingface/datasets/pull/2952
  Body: `aiohttp` was added as a dependency in #2662 but was missing for the conda build, which causes the 1.12.0 and 1.12.1 builds to fail. Fix #2932.
  No comments.

#2951 [PR, merged] Dummy labels no longer on by default in `to_tf_dataset`
  id 1001267888 · Rocketknight1 · created 2021-09-20T18:26:59 · merged 2021-09-21T10:14:32
  https://github.com/huggingface/datasets/pull/2951
  Body: After more experimentation, I think I have a way to do things that doesn't depend on adding `dummy_labels` - they were quite a hacky solution anyway!
  Comments: "@lhoestq Let me make sure we never need it, and if not then I'll remove it entirely in a follow-up PR." · "Thanks ;) it will be less confusing and easier to maintain to not keep unused hacky features"

#2950 [PR, merged] Fix fn kwargs in filter
  id 1001085353 · lhoestq · created 2021-09-20T15:10:26 · merged 2021-09-20T15:28:01
  https://github.com/huggingface/datasets/pull/2950
  Body: #2836 broke the `fn_kwargs` parameter of `filter`, as mentioned in https://github.com/huggingface/datasets/issues/2927. I fixed that and added a test to make sure it doesn't happen again (for either map or filter). Fix #2927.
  No comments.

#2949 [PR, merged] Introduce web and wiki config in triviaqa dataset
  id 1001026680 · shirte · created 2021-09-20T14:17:23 · merged 2021-10-01T15:39:29
  https://github.com/huggingface/datasets/pull/2949
  Body: The TriviaQA paper suggests that the two subsets (Wikipedia and Web) should be treated differently. There are also different leaderboards for the two sets on CodaLab. For that reason, introduce additional builder configs in the trivia_qa dataset.
  Comments: "I just made the dummy data smaller :) Once github refreshes the change I think we can merge!" · "Thank you so much for reviewing and accepting my pull request!! :) I created these rather large dummy data sets to cover all different cases for the row structure. E.g. in the web configuration, it's possib..."

#2948 [PR, merged] Fix minor URL format in scitldr dataset
  id 1000844077 · albertvillanova · created 2021-09-20T11:11:32 · merged 2021-09-20T13:18:28
  https://github.com/huggingface/datasets/pull/2948
  Body: While investigating issue #2918, I found these minor format issues in the URLs (if run on a Windows machine).
  No comments.

#2947 [PR, merged] Don't use old, incompatible cache for the new `filter`
  id 1000798338 · lhoestq · created 2021-09-20T10:18:59 · merged 2021-09-20T13:43:02
  https://github.com/huggingface/datasets/pull/2947
  Body: #2836 changed `Dataset.filter`, and the resulting data stored in the cache are different from and incompatible with those of the previous `filter` implementation. However, the caching mechanism wasn't able to differentiate between the old and the new implementation of filter (only the method name was taken into...
  No comments.
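The fix in #2947 boils down to making the cache fingerprint depend on the implementation itself, not just the method name. A hedged stdlib sketch of that idea; the version strings and helper names are invented for illustration and this is not how `datasets` actually fingerprints:

```python
import hashlib

def implementation_fingerprint(func, version):
    """Build a cache key that changes when either the function's compiled
    bytecode or an explicit implementation version changes, so results
    cached under an old implementation are never reused by a new one."""
    payload = version.encode("utf-8") + func.__code__.co_code
    return hashlib.sha256(payload).hexdigest()[:16]

def keep_short(example):
    return len(example["text"]) < 100

old_key = implementation_fingerprint(keep_short, "filter-v1")
new_key = implementation_fingerprint(keep_short, "filter-v2")
print(old_key != new_key)  # bumping the version invalidates old cache entries
```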
#2946 [PR, merged] Update meteor score from nltk update
  id 1000754824 · lhoestq · created 2021-09-20T09:28:46 · merged 2021-09-20T09:35:59
  https://github.com/huggingface/datasets/pull/2946
  Body: It looks like there were issues in NLTK in the way the METEOR score was computed. A fix was added in NLTK at https://github.com/nltk/nltk/pull/2763, and therefore the scoring function no longer returns the same values. I updated the score of the example in the docs.
  No comments.

#2945 [issue, closed] Protect master branch
  id 1000624883 · albertvillanova · created 2021-09-20T06:47:01 · closed 2021-09-20T12:00:16
  https://github.com/huggingface/datasets/issues/2945
  Body: After an accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into the `datasets` master branch, all commits present in the feature branch were permanently added to the master branch history, e.g. 00cc036fea7c7745cfe722360036ed306796a3f2, 13ae8c98602bbad8197de3b9b425f4c78f582af1, ... I propo...
  Comments: "Cool, I think we can do both :)" · "@lhoestq now the 2 are implemented. Please note that for the second protection, I finally chose to protect the master branch only from merge commits (see updated comment above), so there's no need to disable/re-enable the protection on each release (direct commits, ...

#2944 [issue, closed] Add `remove_columns` to `IterableDataset`
  id 1000544370 · changjonathanc · created 2021-09-20T04:01:00 · closed 2021-10-08T15:31:53
  https://github.com/huggingface/datasets/issues/2944
  Body: Feature request: `from datasets import load_dataset; dataset = load_dataset("c4", 'realnewslike', streaming=True, split='train'); dataset = dataset.remove_columns('url')` fails with AttributeError: 'I...
  Comment: "Hi! Good idea :) If you are interested in contributing, feel free to give it a try and open a Pull Request. Also let me know if I can help you with this or if you have questions"
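Dropping a column from a streamed dataset, as requested in #2944, is conceptually a lazy per-example dict transformation. A stdlib-only sketch over a plain iterator of dicts; the function name and example records are invented, and this is not the eventual `IterableDataset.remove_columns` implementation:

```python
def remove_columns_streaming(stream, columns):
    """Lazily drop the given key(s) from every example in an iterable of
    dicts, mimicking remove_columns over a streamed dataset."""
    drop = {columns} if isinstance(columns, str) else set(columns)
    for example in stream:
        yield {k: v for k, v in example.items() if k not in drop}

records = iter([
    {"url": "https://example.com/1", "text": "first doc"},
    {"url": "https://example.com/2", "text": "second doc"},
])
cleaned = list(remove_columns_streaming(records, "url"))
print(cleaned)  # -> [{'text': 'first doc'}, {'text': 'second doc'}]
```

Because the generator is lazy, nothing is read from the stream until the caller iterates, which is the property a streaming dataset needs.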
#2943 [issue, closed] Backwards compatibility broken for cached datasets that use `.filter()`
  id 1000355115 · anton-l · created 2021-09-19T16:16:37 · closed 2021-09-20T16:25:42
  https://github.com/huggingface/datasets/issues/2943
  Body: After upgrading to datasets 1.12.0, some cached `.filter()` steps from 1.11.0 started failing with `ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='in...
  Comments: "Hi! I guess the caching mechanism should have considered the new `filter` to be different from the old one, and not used cached results from the old `filter`. To prevent other users from having this issue we could make the caching differentiate the two, what do you think?" · "If it's easy enough to implement, ...

#2942 [PR, merged] Add SEDE dataset
  id 1000309765 · Hazoom · created 2021-09-19T13:11:24 · merged 2021-09-24T10:39:54
  https://github.com/huggingface/datasets/pull/2942
  Body: This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions on how to add a dataset and a dataset card. Please see our paper for more details: https://arxiv.org/abs/2106.05006
  Comments: "Thanks @albertvillanova for your great suggestions! I just pushed a new commit with the necessary fixes. For some reason, the test `test_metric_common` failed for the `meteor` metric, which doesn't have any connection to this PR, so I'm trying to rebase and see if it helps." · "Hi @Hazoom, You were right: the ...

#2941 [issue, open] OSCAR unshuffled_original_ko: NonMatchingSplitsSizesError
  id 1000000711 · ayaka14732 · created 2021-09-18T10:39:13 · updated 2022-01-19T14:10:07
  https://github.com/huggingface/datasets/issues/2941
  Body: Cannot download OSCAR `unshuffled_original_ko` due to `NonMatchingSplitsSizesError`. Steps to reproduce: `dataset = datasets.load_dataset('oscar', 'unshuffled_original_ko')` raises NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=25292102197, num...
  Comment: "I tried `unshuffled_original_da` and it is also not working"

#2940 [PR, merged] add swedish_medical_ner dataset
  id 999680796 · bwang482 · created 2021-09-17T20:03:05 · merged 2021-10-05T12:13:33
  https://github.com/huggingface/datasets/pull/2940
  Body: Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021".
  No comments.

#2939 [PR, merged] MENYO-20k repo has moved, updating URL
  id 999639630 · cdleong · created 2021-09-17T19:01:54 · merged 2021-09-21T15:31:36
  https://github.com/huggingface/datasets/pull/2939
  Body: The dataset repo moved to https://github.com/uds-lsv/menyo-20k_MT, so this updates the URL to match. https://github.com/uds-lsv/menyo-20k_MT/blob/master/data/train.tsv is the file we're looking for.
  No comments.

#2938 [PR, merged] Take namespace into account in caching
  id 999552263 · lhoestq · created 2021-09-17T16:57:33 · merged 2021-09-29T13:01:31
  https://github.com/huggingface/datasets/pull/2938
  Body: Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset taking only the dataset name into account and ignoring the username. Because of this, if a user later loaded "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing. I...
  Comment: "We might have collisions if a username and a dataset_name are the same. Maybe instead serialize the dataset name by replacing `/` with some string, e.g. `__SLASH__`, that will hopefully never appear in a dataset or user name (it's what I did in https://github.com/huggingface/datasets-preview-backend/blob/master/benc...
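The collision concern raised in the #2938 comment can be shown with a tiny serialization helper; `___` is an arbitrary separator chosen for this sketch, not necessarily what `datasets` ended up using:

```python
def cache_dir_name(dataset_id):
    """Serialize 'username/dataset_name' (or a bare 'dataset_name') into one
    flat directory name. Keeping the namespace means 'someuser/squad' and
    'squad' can never share a cache entry."""
    return dataset_id.replace("/", "___")

print(cache_dir_name("someuser/squad"))  # -> someuser___squad
print(cache_dir_name("squad"))           # -> squad
print(cache_dir_name("someuser/squad") == cache_dir_name("squad"))  # -> False
```

The separator must be a token that cannot appear in user or dataset names, otherwise two distinct ids could still map to the same directory.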
#2937 [issue, closed] load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
  id 999548277 · daqieq · created 2021-09-17T16:52:10 · closed 2022-08-24T13:09:08
  https://github.com/huggingface/datasets/issues/2937
  Body: Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11. Steps to reproduce: `from datasets import load_dataset; ds = load_dataset('wiki_bio')`. It is expected that the dataset downloads without any er...
  Comment: "Hi @daqieq, thanks for reporting. Unfortunately, I was not able to reproduce this bug: `from datasets import load_dataset; ds = load_dataset('wiki_bio')` ... Using custom data configuration default...

#2936 [PR, merged] Check that array is not Float as nan != nan
  id 999521647 · Iwontbecreative · created 2021-09-17T16:16:41 · merged 2021-09-21T09:39:04
  https://github.com/huggingface/datasets/pull/2936
  Body: The Exception wants to check for issues with StructArrays/ListArrays but catches FloatArrays with value nan, as nan != nan. Pass on FloatArrays, as we should not raise an Exception for them.
  No comments.
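The root cause in #2936 is IEEE 754 semantics: NaN never compares equal to itself, so an equality-based array check misfires on float arrays containing NaN. A minimal stdlib demonstration of the pitfall and a NaN-aware comparison:

```python
import math

def floats_equal(a, b):
    """Compare two floats, treating NaN as equal to NaN. Under IEEE 754,
    nan != nan, which is exactly what tripped the array check in #2936."""
    return (math.isnan(a) and math.isnan(b)) or a == b

nan = float("nan")
print(nan != nan)              # -> True: NaN never compares equal to itself
print(floats_equal(nan, nan))  # -> True
print(floats_equal(1.0, 2.0))  # -> False
```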
#2935 [PR, merged] Add Jigsaw unintended Bias
  id 999518469 · Iwontbecreative · created 2021-09-17T16:12:31 · merged 2021-09-24T10:41:52
  https://github.com/huggingface/datasets/pull/2935
  Body: Hi, here's a first attempt at this dataset. It would be great if it could be merged relatively quickly as it is needed for BigScience-related stuff. This requires manual download, and I had some trouble generating dummy_data in this setting, so feedback there is welcome.
  Comments: "Note that the tests seem to fail because of a bug in an Exception at the moment, see: https://github.com/huggingface/datasets/pull/2936 for the fix" · "@lhoestq implemented your changes, I think this might be ready for another look." · "Thanks @lhoestq, implemented the changes, let me know if anything else pops ...

#2934 [issue, closed] to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows
  id 999477413 · lhoestq · created 2021-09-17T15:26:53 · closed 2021-10-13T09:03:23
  https://github.com/huggingface/datasets/issues/2934
  Body: To reproduce: `import datasets as ds; import weakref; import gc; d = ds.load_dataset("mnist", split="train"); ref = weakref.ref(d._data.table); tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="label"); del tfd, d; gc.collect(); assert ref() is None, "Error: there is at least one refe...
  Comment: "I did some investigation and, as it seems, the bug stems from this line: https://github.com/huggingface/datasets/blob/8004d7c3e1d74b29c3e5b0d1660331cd26758363/src/datasets/arrow_dataset.py#L325. The lifecycle of the dataset from the linked line is bound to one of the returned `tf.data.Dataset`. So my (hacky) sol...
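The leak pattern in #2934 (a returned object's closure keeping the source table alive) can be reproduced with stdlib `weakref` alone; `Table` and the two pipeline helpers below are invented stand-ins, not the `datasets`/TensorFlow code:

```python
import gc
import weakref

class Table:
    """Stand-in for the Arrow table whose lifetime #2934 is about."""

def leaky_pipeline(table):
    # The returned closure captures `table`, so the pipeline keeps it alive.
    return lambda: repr(table)

def clean_pipeline(table):
    size = 3  # copy out what is needed; no reference to `table` survives
    return lambda: size

t1 = Table()
ref1 = weakref.ref(t1)
p1 = leaky_pipeline(t1)
del t1
gc.collect()
print(ref1() is not None)  # -> True: the closure still holds the table

t2 = Table()
ref2 = weakref.ref(t2)
p2 = clean_pipeline(t2)
del t2
gc.collect()
print(ref2() is None)  # -> True: nothing keeps the second table alive
```

On Windows this distinction matters more than elsewhere, because a live reference to a memory-mapped file blocks deleting it.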
#2933 [PR, merged] Replace script_version with revision
  id 999392566 · albertvillanova · created 2021-09-17T14:04:39 · merged 2021-09-20T09:52:10
  https://github.com/huggingface/datasets/pull/2933
  Body: As discussed in https://github.com/huggingface/datasets/pull/2718#discussion_r707013278, the parameter name `script_version` is no longer applicable to datasets without a loading script (i.e., datasets with only raw data files). This PR replaces the parameter name `script_version` with `revision`. This way, we are ...
  Comment: "I'm also fine with the removal in 1.15"

#2932 [issue, closed] Conda build fails
  id 999317750 · albertvillanova · created 2021-09-17T12:49:22 · closed 2021-09-21T15:31:10
  https://github.com/huggingface/datasets/issues/2932
  Body: The current `datasets` version in conda is 1.9 instead of 1.12. The build of the conda package fails.
  Comments: "Why 1.9? https://anaconda.org/HuggingFace/datasets currently says 1.11" · "Alright, I added 1.12.0 and 1.12.1 and fixed the conda build in #2952"

#2931 [PR, merged] Fix bug in to_tf_dataset
  id 998326359 · Rocketknight1 · created 2021-09-16T15:08:03 · merged 2021-09-16T17:01:37
  https://github.com/huggingface/datasets/pull/2931
  Body: Replace `set_format()` with `with_format()` so that we don't alter the original dataset in `to_tf_dataset()`.
  Comment: "I'm going to merge it, but yeah - hopefully the CI runner just cleans that up automatically, and few other people run the tests on Windows anyway!"

#2930 [issue, closed] Mutable columns argument breaks set_format
  id 998154311 · Rocketknight1 · created 2021-09-16T12:27:22 · closed 2021-09-16T13:50:53
  https://github.com/huggingface/datasets/issues/2930
  Body: If you pass a mutable list to the `columns` argument of `set_format` and then change the list afterwards, the returned columns also change. Steps to reproduce: `from datasets import load_dataset; dataset = load_dataset("glue", "cola"); column_list = ["idx", "label"]; datas...
  Comment: "Pushed a fix to my branch #2731"
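The aliasing bug in #2930 is the classic mutable-argument pitfall: storing the caller's list instead of a copy. A minimal stand-in showing both the bug and the defensive-copy fix; `Formatter` is invented for illustration, not the actual `set_format` code:

```python
class Formatter:
    """Minimal stand-in for the aliasing bug in #2930: storing the caller's
    list directly means later mutations by the caller leak into the object."""
    def __init__(self):
        self.columns = None

    def set_format_buggy(self, columns):
        self.columns = columns        # keeps a reference to the caller's list

    def set_format_fixed(self, columns):
        self.columns = list(columns)  # defensive copy breaks the alias

cols = ["idx", "label"]
f = Formatter()
f.set_format_buggy(cols)
cols.append("sentence")
print(f.columns)  # -> ['idx', 'label', 'sentence']: mutated from outside

f2 = Formatter()
cols2 = ["idx", "label"]
f2.set_format_fixed(cols2)
cols2.append("sentence")
print(f2.columns)  # -> ['idx', 'label']: the copy is unaffected
```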
997,960,024
2,929
Add regression test for null Sequence
closed
[]
2021-09-16T08:58:33
2021-09-17T08:23:59
2021-09-17T08:23:59
Relates to #2892 and #2900.
albertvillanova
https://github.com/huggingface/datasets/pull/2929
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2929", "html_url": "https://github.com/huggingface/datasets/pull/2929", "diff_url": "https://github.com/huggingface/datasets/pull/2929.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2929.patch", "merged_at": "2021-09-17T08:23...
true
997,941,506
2,928
Update BibTeX entry
closed
[]
2021-09-16T08:39:20
2021-09-16T12:35:34
2021-09-16T12:35:34
Update BibTeX entry.
albertvillanova
https://github.com/huggingface/datasets/pull/2928
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2928", "html_url": "https://github.com/huggingface/datasets/pull/2928", "diff_url": "https://github.com/huggingface/datasets/pull/2928.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2928.patch", "merged_at": "2021-09-16T12:35...
true
997,654,680
2,927
Datasets 1.12 dataset.filter TypeError: get_indices_from_mask_function() got an unexpected keyword argument
closed
[ "Thanks for reporting, I'm looking into it :)", "Fixed by #2950." ]
2021-09-16T01:14:02
2021-09-20T16:23:22
2021-09-20T16:23:21
## Describe the bug Upgrading to 1.12 caused `dataset.filter` call to fail with > get_indices_from_mask_function() got an unexpected keyword argument valid_rel_labels ## Steps to reproduce the bug ```python def filter_good_rows( ex: Dict, valid_rel_labels: Set[str], valid_ner_labels: Set[st...
timothyjlaurent
https://github.com/huggingface/datasets/issues/2927
null
false
997,463,277
2,926
Error when downloading datasets to non-traditional cache directories
open
[ "Same here !" ]
2021-09-15T19:59:46
2021-11-24T21:42:31
null
## Describe the bug When the cache directory is linked (soft link) to a directory on a NetApp device, the download fails. ## Steps to reproduce the bug ```bash ln -s /path/to/netapp/.cache ~/.cache ``` ```python load_dataset("imdb") ``` ## Expected results Successfully loading IMDB dataset ## Actual...
dar-tau
https://github.com/huggingface/datasets/issues/2926
null
false
997,407,034
2,925
Add tutorial for no-code dataset upload
closed
[ "Cool, love it ! :)\r\n\r\nFeel free to add a paragraph saying how to load the dataset:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"stevhliu/demo\")\r\n\r\n# or to separate each csv file into several splits\r\ndata_files = {\"train\": \"train.csv\", \"test\": \"test.csv\"}\r\nd...
2021-09-15T18:54:42
2021-09-27T17:51:55
2021-09-27T17:51:55
This PR is for a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, introduces the online tagging tool for creating tags, and the Dataset card template to get a head start on filling it out. The addition of this tutorial should make it easier for beginners to upload a dat...
stevhliu
https://github.com/huggingface/datasets/pull/2925
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2925", "html_url": "https://github.com/huggingface/datasets/pull/2925", "diff_url": "https://github.com/huggingface/datasets/pull/2925.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2925.patch", "merged_at": "2021-09-27T17:51...
true
997,378,113
2,924
"File name too long" error for file locks
closed
[ "Hi, the filename here is less than 255\r\n```python\r\n>>> len(\"_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock\")\r\n154\r\n```\r\nso not sure why it's considered too long for your filesystem.\r\n(also note...
2021-09-15T18:16:50
2023-12-08T13:39:51
2021-10-29T09:42:24
## Describe the bug Getting the following error when calling `load_dataset("gar1t/test")`: ``` OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.inc...
gar1t
https://github.com/huggingface/datasets/issues/2924
null
false
997,351,590
2,923
Loading an autonlp dataset raises in normal mode but not in streaming mode
closed
[ "Closing since autonlp dataset are now supported" ]
2021-09-15T17:44:38
2022-04-12T10:09:40
2022-04-12T10:09:39
## Describe the bug The same dataset (from autonlp) raises an error in normal mode, but does not raise in streaming mode ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("severo/autonlp-data-sentiment_detection-3c8bcd36", split="train", streaming=False) ## raises an err...
severo
https://github.com/huggingface/datasets/issues/2923
null
false
997,332,662
2,922
Fix conversion of multidim arrays in list to arrow
closed
[]
2021-09-15T17:21:36
2021-09-15T17:22:52
2021-09-15T17:21:45
Arrow only supports 1-dim arrays. Previously we were converting all the numpy arrays to python list before instantiating arrow arrays to workaround this limitation. However in #2361 we started to keep numpy arrays in order to keep their dtypes. It works when we pass any multi-dim numpy array (the conversion to arrow ...
lhoestq
https://github.com/huggingface/datasets/pull/2922
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2922", "html_url": "https://github.com/huggingface/datasets/pull/2922", "diff_url": "https://github.com/huggingface/datasets/pull/2922.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2922.patch", "merged_at": "2021-09-15T17:21...
true
997,325,424
2,921
Using a list of multi-dim numpy arrays raises an error "can only convert 1-dimensional array values"
closed
[]
2021-09-15T17:12:11
2021-09-15T17:21:45
2021-09-15T17:21:45
This error has been introduced in https://github.com/huggingface/datasets/pull/2361 To reproduce: ```python import numpy as np from datasets import Dataset d = Dataset.from_dict({"a": [np.zeros((2, 2))]}) ``` raises ```python Traceback (most recent call last): File "playground/ttest.py", line 5, in <mod...
lhoestq
https://github.com/huggingface/datasets/issues/2921
null
false
997,323,014
2,920
Fix unwanted tqdm bar when accessing examples
closed
[]
2021-09-15T17:09:11
2021-09-15T17:18:24
2021-09-15T17:18:24
A change in #2814 added bad progress bars in `map_nested`. Now they're disabled by default Fix #2919
lhoestq
https://github.com/huggingface/datasets/pull/2920
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2920", "html_url": "https://github.com/huggingface/datasets/pull/2920", "diff_url": "https://github.com/huggingface/datasets/pull/2920.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2920.patch", "merged_at": "2021-09-15T17:18...
true
997,127,487
2,919
Unwanted progress bars when accessing examples
closed
[ "doing a patch release now :)" ]
2021-09-15T14:05:10
2021-09-15T17:21:49
2021-09-15T17:18:23
When accessing examples from a dataset formatted for pytorch, some progress bars appear when accessing examples: ```python In [1]: import datasets as ds In [2]: d = ds.Dataset.from_dict({"a": [0, 1, 2]}).with_format("torch") ...
lhoestq
https://github.com/huggingface/datasets/issues/2919
null
false
997,063,347
2,918
`Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming
closed
[ "Hi @SBrandeis, thanks for reporting! ^^\r\n\r\nI think this is an issue with `fsspec`: https://github.com/intake/filesystem_spec/issues/389\r\n\r\nI will ask them if they are planning to fix it...", "Code to reproduce the bug: `ClientPayloadError: 400, message='Can not decode content-encoding: gzip'`\r\n```pytho...
2021-09-15T13:06:07
2021-12-01T08:15:00
2021-12-01T08:15:00
## Describe the bug Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`: ```python ClientPayloadError: 400, message='Can not decode content-encoding: gzip' ``` cc @lhoestq ## Steps to reproduce the bug ```python from datasets import load_...
SBrandeis
https://github.com/huggingface/datasets/issues/2918
null
false
997,041,658
2,917
windows download abnormal
closed
[ "Hi ! Is there some kind of proxy that is configured in your browser that gives you access to internet ? If it's the case it could explain why it doesn't work in the code, since the proxy wouldn't be used", "It is indeed an agency problem, thank you very, very much", "Let me know if you have other questions :)\...
2021-09-15T12:45:35
2021-09-16T17:17:48
2021-09-16T17:17:48
## Describe the bug The script clearly exists (accessible from the browser), but the script download fails on Windows. Then I tried it again and it can be downloaded normally on Linux. Why? ## Steps to reproduce the bug ```python3.7 + windows ![image](https://user-images.githubusercontent.com/52347799/133436174-43...
wei1826676931
https://github.com/huggingface/datasets/issues/2917
null
false
997,003,661
2,916
Add OpenAI's pass@k code evaluation metric
closed
[ "> The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in datasets?\r\n\r\nIt should work normally, but feel free to test it.\r\nThere is some documentation about using metrics in a distributed setup that uses multiprocessi...
2021-09-15T12:05:43
2021-11-12T14:19:51
2021-11-12T14:19:50
This PR introduces the `code_eval` metric which implements [OpenAI's code evaluation harness](https://github.com/openai/human-eval) introduced in the [Codex paper](https://arxiv.org/abs/2107.03374). It is heavily based on the original implementation and just adapts the interface to follow the `predictions`/`references`...
lvwerra
https://github.com/huggingface/datasets/pull/2916
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2916", "html_url": "https://github.com/huggingface/datasets/pull/2916", "diff_url": "https://github.com/huggingface/datasets/pull/2916.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2916.patch", "merged_at": "2021-11-12T14:19...
true
996,870,071
2,915
Fix fsspec AbstractFileSystem access
closed
[]
2021-09-15T09:39:20
2021-09-15T11:35:24
2021-09-15T11:35:24
This addresses the issue from #2914 by changing the way fsspec's AbstractFileSystem is accessed.
pierre-godard
https://github.com/huggingface/datasets/pull/2915
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2915", "html_url": "https://github.com/huggingface/datasets/pull/2915", "diff_url": "https://github.com/huggingface/datasets/pull/2915.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2915.patch", "merged_at": "2021-09-15T11:35...
true
996,770,168
2,914
Having a dependency defining fsspec entrypoint raises an AttributeError when importing datasets
closed
[ "Closed by #2915." ]
2021-09-15T07:54:06
2021-09-15T16:49:17
2021-09-15T16:49:16
## Describe the bug In one of my project, I defined a custom fsspec filesystem with an entrypoint. My guess is that by doing so, a variable named `spec` is created in the module `fsspec` (created by entering a for loop as there are entrypoints defined, see the loop in question [here](https://github.com/intake/filesys...
pierre-godard
https://github.com/huggingface/datasets/issues/2914
null
false
996,436,368
2,913
timit_asr dataset only includes one text phrase
closed
[ "Hi @margotwagner, \r\nThis bug was fixed in #1995. Upgrading the datasets should work (min v1.8.0 ideally)", "Hi @margotwagner,\r\n\r\nYes, as @bhavitvyamalik has commented, this bug was fixed in `datasets` version 1.5.0. You need to update it, as your current version is 1.4.1:\r\n> Environment info\r\n> - `data...
2021-09-14T21:06:07
2021-09-15T08:05:19
2021-09-15T08:05:18
## Describe the bug The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases. ## Steps to reproduce the bug Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-englis...
margotwagner
https://github.com/huggingface/datasets/issues/2913
null
false
996,256,005
2,912
Update link to Blog in docs footer
closed
[]
2021-09-14T17:23:14
2021-09-15T07:59:23
2021-09-15T07:59:23
Update link.
albertvillanova
https://github.com/huggingface/datasets/pull/2912
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2912", "html_url": "https://github.com/huggingface/datasets/pull/2912", "diff_url": "https://github.com/huggingface/datasets/pull/2912.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2912.patch", "merged_at": "2021-09-15T07:59...
true
996,202,598
2,911
Fix exception chaining
closed
[]
2021-09-14T16:19:29
2021-09-16T15:04:44
2021-09-16T15:04:44
Fix exception chaining to avoid tracebacks with message: `During handling of the above exception, another exception occurred:`
albertvillanova
https://github.com/huggingface/datasets/pull/2911
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2911", "html_url": "https://github.com/huggingface/datasets/pull/2911", "diff_url": "https://github.com/huggingface/datasets/pull/2911.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2911.patch", "merged_at": "2021-09-16T15:04...
true
996,149,632
2,910
feat: 🎸 pass additional arguments to get private configs + info
closed
[ "Included in https://github.com/huggingface/datasets/pull/2906" ]
2021-09-14T15:24:19
2021-09-15T16:19:09
2021-09-15T16:19:06
`use_auth_token` can now be passed to the functions to get the configs or infos of private datasets on the hub
severo
https://github.com/huggingface/datasets/pull/2910
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2910", "html_url": "https://github.com/huggingface/datasets/pull/2910", "diff_url": "https://github.com/huggingface/datasets/pull/2910.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2910.patch", "merged_at": null }
true
996,002,180
2,909
fix anli splits
closed
[]
2021-09-14T13:10:35
2021-10-13T11:27:49
2021-10-13T11:27:49
I can't run the tests for dummy data, facing this error `ImportError while loading conftest '/home/zaid/tmp/fix_anli_splits/datasets/tests/conftest.py'. tests/conftest.py:10: in <module> from datasets import config E ImportError: cannot import name 'config' from 'datasets' (unknown location)`
zaidalyafeai
https://github.com/huggingface/datasets/pull/2909
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2909", "html_url": "https://github.com/huggingface/datasets/pull/2909", "diff_url": "https://github.com/huggingface/datasets/pull/2909.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2909.patch", "merged_at": null }
true
995,970,612
2,908
Update Zenodo metadata with creator names and affiliation
closed
[]
2021-09-14T12:39:37
2021-09-14T14:29:25
2021-09-14T14:29:25
This PR helps in prefilling author data when automatically generating the DOI after each release.
albertvillanova
https://github.com/huggingface/datasets/pull/2908
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2908", "html_url": "https://github.com/huggingface/datasets/pull/2908", "diff_url": "https://github.com/huggingface/datasets/pull/2908.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2908.patch", "merged_at": "2021-09-14T14:29...
true
995,968,152
2,907
add story_cloze dataset
closed
[ "Will create a new one, this one seems to be missed up. " ]
2021-09-14T12:36:53
2021-10-08T21:41:42
2021-10-08T21:41:41
@lhoestq I have spent some time but I still can't succeed in correctly testing the dummy_data.
zaidalyafeai
https://github.com/huggingface/datasets/pull/2907
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2907", "html_url": "https://github.com/huggingface/datasets/pull/2907", "diff_url": "https://github.com/huggingface/datasets/pull/2907.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2907.patch", "merged_at": null }
true
995,962,905
2,906
feat: 🎸 add a function to get a dataset config's split names
closed
[ "> Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos)\r\n\r\nYes totally :) This tutorial should indeed mention this, given how fundamental it is" ]
2021-09-14T12:31:22
2021-10-04T09:55:38
2021-10-04T09:55:37
Also: pass additional arguments (use_auth_token) to get private configs + info of private datasets on the hub Questions: - [x] I'm not sure how the versions work: I changed 1.12.1.dev0 to 1.12.1.dev1, was it correct? -> no: reverted - [x] Should I add a section in https://github.com/huggingface/datasets/blo...
severo
https://github.com/huggingface/datasets/pull/2906
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2906", "html_url": "https://github.com/huggingface/datasets/pull/2906", "diff_url": "https://github.com/huggingface/datasets/pull/2906.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2906.patch", "merged_at": "2021-10-04T09:55...
true
995,843,964
2,905
Update BibTeX entry
closed
[]
2021-09-14T10:16:17
2021-09-14T12:25:37
2021-09-14T12:25:37
Update BibTeX entry.
albertvillanova
https://github.com/huggingface/datasets/pull/2905
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2905", "html_url": "https://github.com/huggingface/datasets/pull/2905", "diff_url": "https://github.com/huggingface/datasets/pull/2905.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2905.patch", "merged_at": "2021-09-14T12:25...
true
995,814,222
2,904
FORCE_REDOWNLOAD does not work
open
[ "Hi ! Thanks for reporting. The error seems to happen only if you use compressed files.\r\n\r\nThe second dataset is prepared in another dataset cache directory than the first - which is normal, since the source file is different. However, it doesn't uncompress the new data file because it finds the old uncompresse...
2021-09-14T09:45:26
2021-10-06T09:37:19
null
## Describe the bug With GenerateMode.FORCE_REDOWNLOAD, the documentation says +------------------------------------+-----------+---------+ | | Downloads | Dataset | +====================================+===========+=========+ | `REUSE_DATASET_IF_EXISTS` (default...
anoopkatti
https://github.com/huggingface/datasets/issues/2904
null
false
995,715,191
2,903
Fix xpathopen to accept positional arguments
closed
[ "thanks!" ]
2021-09-14T08:02:50
2021-09-14T08:51:21
2021-09-14T08:40:47
Fix `xpathopen()` so that it also accepts positional arguments. Fix #2901.
albertvillanova
https://github.com/huggingface/datasets/pull/2903
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2903", "html_url": "https://github.com/huggingface/datasets/pull/2903", "diff_url": "https://github.com/huggingface/datasets/pull/2903.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2903.patch", "merged_at": "2021-09-14T08:40...
true
995,254,216
2,902
Add WIT Dataset
closed
[ "@hassiahk is working on it #2810 ", "WikiMedia is now hosting the pixel values directly which should make it a lot easier!\r\nThe files can be found here:\r\nhttps://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/\r\nhttps://analyti...
2021-09-13T19:38:49
2024-10-02T15:37:48
2022-06-01T17:28:40
## Adding a Dataset - **Name:** *WIT* - **Description:** *Wikipedia-based Image Text Dataset* - **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning ](https://arxiv.org/abs/2103.01913)* - **Data:** *https://github.com/google-research-datasets/wit* - **Motivation:** (e...
nateraw
https://github.com/huggingface/datasets/issues/2902
null
false
995,232,844
2,901
Incompatibility with pytest
closed
[ "Sorry, my bad... When implementing `xpathopen`, I just considered the use case in the COUNTER dataset... I'm fixing it!" ]
2021-09-13T19:12:17
2021-09-14T08:40:47
2021-09-14T08:40:47
## Describe the bug pytest complains about xpathopen / path.open("w") ## Steps to reproduce the bug Create a test file, `test.py`: ```python import datasets as ds def load_dataset(): ds.load_dataset("counter", split="train", streaming=True) ``` And launch it with pytest: ```bash python -m pyt...
severo
https://github.com/huggingface/datasets/issues/2901
null
false
994,922,580
2,900
Fix null sequence encoding
closed
[]
2021-09-13T13:55:08
2021-09-13T14:17:43
2021-09-13T14:17:42
The Sequence feature encoding was failing when a `None` sequence was used in a dataset. Fix https://github.com/huggingface/datasets/issues/2892
lhoestq
https://github.com/huggingface/datasets/pull/2900
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2900", "html_url": "https://github.com/huggingface/datasets/pull/2900", "diff_url": "https://github.com/huggingface/datasets/pull/2900.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2900.patch", "merged_at": "2021-09-13T14:17...
true
994,082,432
2,899
Dataset
closed
[]
2021-09-12T07:38:53
2021-09-12T16:12:15
2021-09-12T16:12:15
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
rcacho172
https://github.com/huggingface/datasets/issues/2899
null
false
994,032,814
2,898
Hug emoji
closed
[]
2021-09-12T03:27:51
2021-09-12T16:13:13
2021-09-12T16:13:13
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
Jackg-08
https://github.com/huggingface/datasets/issues/2898
null
false
993,798,386
2,897
Add OpenAI's HumanEval dataset
closed
[ "I just fixed the class name, and added `[More Information Needed]` in empty sections in case people want to complete the dataset card :)" ]
2021-09-11T09:37:47
2021-09-16T15:02:11
2021-09-16T15:02:11
This PR adds OpenAI's [HumanEval](https://github.com/openai/human-eval) dataset. The dataset consists of 164 handcrafted programming problems with solutions and unittests to verify solution. This dataset is useful to evaluate code generation models.
lvwerra
https://github.com/huggingface/datasets/pull/2897
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2897", "html_url": "https://github.com/huggingface/datasets/pull/2897", "diff_url": "https://github.com/huggingface/datasets/pull/2897.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2897.patch", "merged_at": "2021-09-16T15:02...
true
993,613,113
2,896
add multi-proc in `to_csv`
closed
[ "I think you can just add a test `test_dataset_to_csv_multiproc` in `tests/io/test_csv.py` and we'll be good", "Hi @lhoestq, \r\nI've added `test_dataset_to_csv` apart from `test_dataset_to_csv_multiproc` as no test was there to check generated CSV file when `num_proc=1`. Please let me know if anything is also re...
2021-09-10T21:35:09
2021-10-28T05:47:33
2021-10-26T16:00:42
This PR extends the multi-proc method used in #2747 for`to_json` to `to_csv` as well. Results on my machine post benchmarking on `ascent_kb` dataset (giving ~45% improvement when compared to num_proc = 1): ``` Time taken on 1 num_proc, 10000 batch_size 674.2055702209473 Time taken on 4 num_proc, 10000 batch_siz...
bhavitvyamalik
https://github.com/huggingface/datasets/pull/2896
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2896", "html_url": "https://github.com/huggingface/datasets/pull/2896", "diff_url": "https://github.com/huggingface/datasets/pull/2896.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2896.patch", "merged_at": "2021-10-26T16:00...
true
993,462,274
2,895
Use pyarrow.Table.replace_schema_metadata instead of pyarrow.Table.cast
closed
[]
2021-09-10T17:56:57
2021-09-21T22:50:01
2021-09-21T08:18:35
This PR partially addresses #2252. ``update_metadata_with_features`` uses ``Table.cast`` which slows down ``load_from_disk`` (and possibly other methods that use it) for very large datasets. Since ``update_metadata_with_features`` is only updating the schema metadata, it makes more sense to use ``pyarrow.Table.repla...
arsarabi
https://github.com/huggingface/datasets/pull/2895
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2895", "html_url": "https://github.com/huggingface/datasets/pull/2895", "diff_url": "https://github.com/huggingface/datasets/pull/2895.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2895.patch", "merged_at": "2021-09-21T08:18...
true
993,375,654
2,894
Fix COUNTER dataset
closed
[]
2021-09-10T16:07:29
2021-09-10T16:27:45
2021-09-10T16:27:44
Fix filename generating `FileNotFoundError`. Related to #2866. CC: @severo.
albertvillanova
https://github.com/huggingface/datasets/pull/2894
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2894", "html_url": "https://github.com/huggingface/datasets/pull/2894", "diff_url": "https://github.com/huggingface/datasets/pull/2894.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2894.patch", "merged_at": "2021-09-10T16:27...
true
993,342,781
2,893
add mbpp dataset
closed
[ "I think it's fine to have the original schema" ]
2021-09-10T15:27:30
2021-09-16T09:35:42
2021-09-16T09:35:42
This PR adds the mbpp dataset introduced by Google [here](https://github.com/google-research/google-research/tree/master/mbpp) as mentioned in #2816. The dataset contain two versions: a full and a sanitized one. They have a slightly different schema and it is current state the loading preserves the original schema. ...
lvwerra
https://github.com/huggingface/datasets/pull/2893
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2893", "html_url": "https://github.com/huggingface/datasets/pull/2893", "diff_url": "https://github.com/huggingface/datasets/pull/2893.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2893.patch", "merged_at": "2021-09-16T09:35...
true
993,274,572
2,892
Error when encoding a dataset with None objects with a Sequence feature
closed
[ "This has been fixed by https://github.com/huggingface/datasets/pull/2900\r\nWe're doing a new release 1.12 today to make the fix available :)" ]
2021-09-10T14:11:43
2021-09-13T14:18:13
2021-09-13T14:17:42
There is an error when encoding a dataset with None objects with a Sequence feature To reproduce: ```python from datasets import Dataset, Features, Value, Sequence data = {"a": [[0], None]} features = Features({"a": Sequence(Value("int32"))}) dataset = Dataset.from_dict(data, features=features) ``` raises ...
lhoestq
https://github.com/huggingface/datasets/issues/2892
null
false
993,161,984
2,891
Allow dynamic first dimension for ArrayXD
closed
[ "@lhoestq, thanks for your review.\r\n\r\nI added test for `to_pylist`, I didn't do that for `to_numpy` because this method shouldn't be called for dynamic dimension ArrayXD - this method will try to make a single numpy array for the whole column which cannot be done for dynamic arrays.\r\n\r\nI dig into `to_pandas...
2021-09-10T11:52:52
2021-11-23T15:33:13
2021-10-29T09:37:17
Add support for dynamic first dimension for ArrayXD features. See issue [#887](https://github.com/huggingface/datasets/issues/887). The following changes allow the `to_pylist` method of `ArrayExtensionArray` to return a list of numpy arrays where the first dimension can vary. @lhoestq Could you suggest how you want to exten...
rpowalski
https://github.com/huggingface/datasets/pull/2891
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2891", "html_url": "https://github.com/huggingface/datasets/pull/2891", "diff_url": "https://github.com/huggingface/datasets/pull/2891.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2891.patch", "merged_at": "2021-10-29T09:37...
true
993,074,102
2,890
0x290B112ED1280537B24Ee6C268a004994a16e6CE
closed
[]
2021-09-10T09:51:17
2021-09-10T11:45:29
2021-09-10T11:45:29
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
rcacho172
https://github.com/huggingface/datasets/issues/2890
null
false
992,968,382
2,889
Coc
closed
[]
2021-09-10T07:32:07
2021-09-10T11:45:54
2021-09-10T11:45:54
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
Bwiggity
https://github.com/huggingface/datasets/issues/2889
null
false
992,676,535
2,888
v1.11.1 release date
closed
[ "Hi ! Probably 1.12 on monday :)\r\n", "@albertvillanova i think this issue is still valid and should not be closed till `>1.11.0` is published :)" ]
2021-09-09T21:53:15
2021-09-12T20:18:35
2021-09-12T16:15:39
Hello, I need to use the latest features in one of my packages, but there has been no new datasets release in the last 2 months. When do you plan to publish the v1.11.1 release?
fcakyon
https://github.com/huggingface/datasets/issues/2888
null
false
992,576,305
2,887
#2837 Use cache folder for lockfile
closed
[ "The CI fail about the meteor metric is unrelated to this PR " ]
2021-09-09T19:55:56
2021-10-05T17:58:22
2021-10-05T17:58:22
Fixes #2837 Use a cache directory to store the FileLock. The issue was that the lock file was in a read-only folder.
Dref360
https://github.com/huggingface/datasets/pull/2887
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2887", "html_url": "https://github.com/huggingface/datasets/pull/2887", "diff_url": "https://github.com/huggingface/datasets/pull/2887.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2887.patch", "merged_at": "2021-10-05T17:58...
true
992,534,632
2,886
Hj
closed
[]
2021-09-09T18:58:52
2021-09-10T11:46:29
2021-09-10T11:46:29
null
Noorasri
https://github.com/huggingface/datasets/issues/2886
null
false
992,160,544
2,885
Adding an Elastic Search index to a Dataset
open
[ "Hi, is this bug deterministic in your poetry env ? I mean, does it always stop at 90% or is it random ?\r\n\r\nAlso, can you try using another version of Elasticsearch ? Maybe there's an issue with the one of you poetry env", "I face similar issue with oscar dataset on remote ealsticsearch instance. It was mainl...
2021-09-09T12:21:39
2021-10-20T18:57:11
null
## Describe the bug When trying to index documents from the squad dataset, the connection to ElasticSearch seems to break: Reusing dataset squad (/Users/andreasmotz/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453) 90%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ...
MotzWanted
https://github.com/huggingface/datasets/issues/2885
null
false
992,135,698
2,884
Add IC, SI, ER tasks to SUPERB
closed
[ "Sorry for the late PR, uploading 10+GB files to the hub through a VPN was an adventure :sweat_smile: ", "Thank you so much for adding these subsets @anton-l! \r\n\r\n> These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingfac...
2021-09-09T11:56:03
2021-09-20T09:17:58
2021-09-20T09:00:49
This PR adds 3 additional classification tasks to SUPERB #### Intent Classification Dataset URL seems to be down at the moment :( See the note below. S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/...
anton-l
https://github.com/huggingface/datasets/pull/2884
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2884", "html_url": "https://github.com/huggingface/datasets/pull/2884", "diff_url": "https://github.com/huggingface/datasets/pull/2884.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2884.patch", "merged_at": "2021-09-20T09:00...
true
991,969,875
2,883
Fix data URLs and metadata in DocRED dataset
closed
[]
2021-09-09T08:55:34
2021-09-13T11:24:31
2021-09-13T11:24:31
The host of `docred` dataset has updated the `dev` data file. This PR: - Updates the dev URL - Updates dataset metadata This PR also fixes the URL of the `train_distant` split, which was wrong. Fix #2882.
albertvillanova
https://github.com/huggingface/datasets/pull/2883
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2883", "html_url": "https://github.com/huggingface/datasets/pull/2883", "diff_url": "https://github.com/huggingface/datasets/pull/2883.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2883.patch", "merged_at": "2021-09-13T11:24...
true
991,800,141
2,882
`load_dataset('docred')` results in a `NonMatchingChecksumError`
closed
[ "Hi @tmpr, thanks for reporting.\r\n\r\nTwo weeks ago (23th Aug), the host of the source `docred` dataset updated one of the files (`dev.json`): you can see it [here](https://drive.google.com/drive/folders/1c5-0YwnoJx8NS6CV2f-NoTHR__BdkNqw).\r\n\r\nTherefore, the checksum needs to be updated.\r\n\r\nNormally, in th...
2021-09-09T05:55:02
2021-09-13T11:24:30
2021-09-13T11:24:30
## Describe the bug I get consistent `NonMatchingChecksumError: Checksums didn't match for dataset source files` errors when trying to execute `datasets.load_dataset('docred')`. ## Steps to reproduce the bug It is quasi only this code: ```python import datasets data = datasets.load_dataset('docred') ``` ## ...
tmpr
https://github.com/huggingface/datasets/issues/2882
null
false
991,639,142
2,881
Add BIOSSES dataset
closed
[]
2021-09-09T00:35:36
2021-09-13T14:20:40
2021-09-13T14:20:40
Adding the biomedical semantic sentence similarity dataset, BIOSSES, listed in "Biomedical Datasets - BigScience Workshop 2021"
bwang482
https://github.com/huggingface/datasets/pull/2881
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2881", "html_url": "https://github.com/huggingface/datasets/pull/2881", "diff_url": "https://github.com/huggingface/datasets/pull/2881.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2881.patch", "merged_at": "2021-09-13T14:20...
true
990,877,940
2,880
Extend support for streaming datasets that use pathlib.Path stem/suffix
closed
[]
2021-09-08T08:42:43
2021-09-09T13:13:29
2021-09-09T13:13:29
This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the properties `pathlib.Path.stem` and `pathlib.Path.suffix`. Related to #2876, #2874, #2866. CC: @severo
albertvillanova
https://github.com/huggingface/datasets/pull/2880
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2880", "html_url": "https://github.com/huggingface/datasets/pull/2880", "diff_url": "https://github.com/huggingface/datasets/pull/2880.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2880.patch", "merged_at": "2021-09-09T13:13...
true
990,257,404
2,879
In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?"
closed
[ "Hi @rcgale, thanks for reporting.\r\n\r\nPlease note that this bug was fixed on `datasets` version 1.5.0: https://github.com/huggingface/datasets/commit/a23c73e526e1c30263834164f16f1fdf76722c8c#diff-f12a7a42d4673bb6c2ca5a40c92c29eb4fe3475908c84fd4ce4fad5dc2514878\r\n\r\nIf you update `datasets` version, that shoul...
2021-09-07T18:53:45
2021-09-08T16:55:19
2021-09-08T09:12:28
## Describe the bug Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same. ## Steps to reproduce the bug I was following this tutorial - https://huggingface.co/blog/fine-tune-wav2vec2-english But here's a distilled repro: ```python !pip install datasets==1.4.1 from datasets import load_datas...
rcgale
https://github.com/huggingface/datasets/issues/2879
null
false
990,093,316
2,878
NotADirectoryError: [WinError 267] During load_from_disk
open
[]
2021-09-07T15:15:05
2021-09-07T15:15:05
null
## Describe the bug Trying to load saved dataset or dataset directory from Amazon S3 on a Windows machine fails. Performing the same operation succeeds on non-windows environment (AWS Sagemaker). ## Steps to reproduce the bug ```python # Followed https://huggingface.co/docs/datasets/filesystems.html#loading-a-pr...
Grassycup
https://github.com/huggingface/datasets/issues/2878
null
false
990,027,249
2,877
Don't keep the dummy data folder or dataset_infos.json when resolving data files
closed
[ "Hi @lhoestq I am new to huggingface datasets, I would like to work on this issue!\r\n", "Thanks for the help :) \r\n\r\nAs mentioned in the PR, excluding files named \"dummy_data.zip\" is actually more general than excluding the files inside a \"dummy\" folder. I just did the change in the PR, I think we can mer...
2021-09-07T14:09:04
2021-09-29T09:05:38
2021-09-29T09:05:38
When there's no dataset script, all the data files of a folder or a repository on the Hub are loaded as data files. There are already a few exceptions: - files starting with "." are ignored - the dataset card "README.md" is ignored - any file named "config.json" is ignored (currently it isn't used anywhere, but i...
lhoestq
https://github.com/huggingface/datasets/issues/2877
null
false
990,001,079
2,876
Extend support for streaming datasets that use pathlib.Path.glob
closed
[ "I am thinking that ideally we should call `fs.glob()` instead...", "Thanks, @lhoestq: the idea of adding the mock filesystem is to avoid network calls and reduce testing time ;) \r\n\r\nI have added `rglob` as well and fixed some bugs." ]
2021-09-07T13:43:45
2021-09-10T09:50:49
2021-09-10T09:50:48
This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the method `pathlib.Path.glob`. Related to #2874, #2866. CC: @severo
albertvillanova
https://github.com/huggingface/datasets/pull/2876
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2876", "html_url": "https://github.com/huggingface/datasets/pull/2876", "diff_url": "https://github.com/huggingface/datasets/pull/2876.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2876.patch", "merged_at": "2021-09-10T09:50...
true
989,919,398
2,875
Add Congolese Swahili speech datasets
open
[]
2021-09-07T12:13:50
2021-09-07T12:13:50
null
## Adding a Dataset - **Name:** Congolese Swahili speech corpora - **Data:** https://gamayun.translatorswb.org/data/ Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Also related: https://mobile.twitter.com/OktemAlp/status/14351963936...
osanseviero
https://github.com/huggingface/datasets/issues/2875
null
false
989,685,328
2,874
Support streaming datasets that use pathlib
closed
[ "I've tried https://github.com/huggingface/datasets/issues/2866 again, and I get the same error.\r\n\r\n```python\r\nimport datasets as ds\r\nds.load_dataset('counter', split=\"train\", streaming=False)\r\n```", "@severo Issue #2866 is not fully fixed yet: multiple patches need to be implemented for `pathlib`, as...
2021-09-07T07:35:49
2021-09-07T18:25:22
2021-09-07T11:41:15
This PR extends the support in streaming mode for datasets that use `pathlib.Path`. Related to: #2866. CC: @severo
albertvillanova
https://github.com/huggingface/datasets/pull/2874
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2874", "html_url": "https://github.com/huggingface/datasets/pull/2874", "diff_url": "https://github.com/huggingface/datasets/pull/2874.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2874.patch", "merged_at": "2021-09-07T11:41...
true
989,587,695
2,873
adding swedish_medical_ner
closed
[ "Hi, what's the current status of this request? It says Changes requested, but I can't see what changes?", "Hi, it looks like this PR includes changes to other files that `swedish_medical_ner`.\r\n\r\nFeel free to remove these changes, or simply create a new PR that only contains the addition of the dataset" ]
2021-09-07T04:44:53
2021-09-17T20:47:37
2021-09-17T20:47:37
Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021" Code refactored
bwang482
https://github.com/huggingface/datasets/pull/2873
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2873", "html_url": "https://github.com/huggingface/datasets/pull/2873", "diff_url": "https://github.com/huggingface/datasets/pull/2873.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2873.patch", "merged_at": null }
true
989,453,069
2,872
adding swedish_medical_ner
closed
[]
2021-09-06T22:00:52
2021-09-07T04:36:32
2021-09-07T04:36:32
Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
bwang482
https://github.com/huggingface/datasets/pull/2872
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2872", "html_url": "https://github.com/huggingface/datasets/pull/2872", "diff_url": "https://github.com/huggingface/datasets/pull/2872.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2872.patch", "merged_at": null }
true
989,436,088
2,871
datasets.config.PYARROW_VERSION has no attribute 'major'
closed
[ "I have changed line 288 to `if int(datasets.config.PYARROW_VERSION.split(\".\")[0]) < 3:` just to get around it.", "Hi @bwang482,\r\n\r\nI'm sorry but I'm not able to reproduce your bug.\r\n\r\nPlease note that in our current master branch, we made a commit (d03223d4d64b89e76b48b00602aba5aa2f817f1e) that simulta...
2021-09-06T21:06:57
2021-09-08T08:51:52
2021-09-08T08:51:52
In the test_dataset_common.py script, line 288-289 ``` if datasets.config.PYARROW_VERSION.major < 3: packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"] ``` which throws the error below. `datasets.config.PYARROW_VERSION` itself return the string '4.0.1'. I have tested thi...
bwang482
https://github.com/huggingface/datasets/issues/2871
null
false
988,276,859
2,870
Fix three typos in two files for documentation
closed
[]
2021-09-04T11:49:43
2021-09-06T08:21:21
2021-09-06T08:19:35
Changed "bacth_size" to "batch_size" (2x) Changed "intsructions" to "instructions"
leny-mi
https://github.com/huggingface/datasets/pull/2870
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2870", "html_url": "https://github.com/huggingface/datasets/pull/2870", "diff_url": "https://github.com/huggingface/datasets/pull/2870.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2870.patch", "merged_at": "2021-09-06T08:19...
true