Dataset columns (type and value range / class count, as reported by the dataset viewer):

id               int64          599M to 3.48B
number           int64          1 to 7.8k
title            string         lengths 1 to 290
state            string         2 classes
comments         list           lengths 0 to 30
created_at       timestamp[s]   2020-04-14 10:18:02 to 2025-10-05 06:37:50
updated_at       timestamp[s]   2020-04-27 16:04:17 to 2025-10-05 10:32:43
closed_at        timestamp[s]   2020-04-14 12:01:40 to 2025-10-01 13:56:03 (nullable)
body             string         lengths 0 to 228k (nullable)
user             string         lengths 3 to 26
html_url         string         lengths 46 to 51
pull_request     dict
is_pull_request  bool           2 classes
#3173 (pull request, closed): Fix issue with filelock filename being too long on encrypted filesystems
id: 1038404300 | user: mariosasko | created: 2021-10-28T11:28:57 | updated: 2021-10-29T09:42:24 | closed: 2021-10-29T09:42:24
url: https://github.com/huggingface/datasets/pull/3173
comments: []
body: Infer max filename length in filelock on Unix-like systems. Should fix problems on encrypted filesystems such as eCryptfs. Fix #2924 cc: @lmmx
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3173", "html_url": "https://github.com/huggingface/datasets/pull/3173", "diff_url": "https://github.com/huggingface/datasets/pull/3173.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3173.patch", "merged_at": "2021-10-29T09:42...
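The PR body above describes inferring the maximum filename length on Unix-like systems. A minimal sketch of that idea, using the standard `os.pathconf` query (the helper name and 255 fallback are illustrative, not the PR's actual code):

```python
import os

def max_filename_length(directory: str = ".") -> int:
    """Best-effort query of the filesystem's maximum filename length.

    On Unix-like systems os.pathconf reports PC_NAME_MAX, which is lower
    than the usual 255 on stacked encrypted filesystems such as eCryptfs
    (143). Falls back to 255 where the query is unavailable.
    """
    if hasattr(os, "pathconf") and "PC_NAME_MAX" in getattr(os, "pathconf_names", {}):
        try:
            return os.pathconf(directory, "PC_NAME_MAX")
        except (OSError, ValueError):
            pass
    return 255
```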
#3172 (issue, closed): `SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1`
id: 1038351587 | user: vlievin | created: 2021-10-28T10:29:00 | updated: 2024-04-02T18:13:21 | closed: 2021-11-03T11:26:10
url: https://github.com/huggingface/datasets/issues/3172
comments: [ "NB: even if the error is raised, the dataset is successfully cached. So restarting the script after every `map()` allows to ultimately run the whole preprocessing. But this prevents to realistically run the code over multiple nodes.", "Hi,\r\n\r\nIt's not easy to debug the problem without the script. I may be wr...
body: ## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`. Traceback included bellow. The exception is raised only when the code runs within a specific context. Despite ~10h spent ...
pull_request: null
#3171 (issue, closed): Raise exceptions instead of using assertions for control flow
id: 1037728059 | user: mariosasko | created: 2021-10-27T18:26:52 | updated: 2021-12-23T16:40:37 | closed: 2021-12-23T16:40:37
url: https://github.com/huggingface/datasets/issues/3171
comments: [ "Adding the remaining tasks for this issue to help new code contributors. \r\n$ cd src/datasets && ack assert -lc \r\n- [x] commands/convert.py:1\r\n- [x] arrow_reader.py:3\r\n- [x] load.py:7\r\n- [x] utils/py_utils.py:2\r\n- [x] features/features.py:9\r\n- [x] arrow_writer.py:7\r\n- [x] search.py:6\r\n- [x] table...
body: Motivated by https://github.com/huggingface/transformers/issues/12789 in Transformers, one welcoming change would be replacing assertions with proper exceptions. The only type of assertions we should keep are those used as sanity checks. Currently, there is a total of 87 files with the `assert` statements (located u...
pull_request: null
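The issue above asks for assertions used as control flow to become proper exceptions. A toy before/after of the pattern (the function and its column-selection logic are made up for illustration; they are not code from the library):

```python
def pick_columns(requested, available):
    # Before (control flow via assertion, silently skipped under `python -O`):
    #     assert set(requested) <= set(available), "unknown columns"
    # After: a proper exception with an actionable message.
    missing = sorted(set(requested) - set(available))
    if missing:
        raise ValueError(
            f"Columns {missing} not found. Available columns: {sorted(available)}"
        )
    return [c for c in available if c in set(requested)]
```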
#3170 (pull request, closed): Preserve ordering in `zip_dict`
id: 1037601926 | user: mariosasko | created: 2021-10-27T16:07:30 | updated: 2021-10-29T13:09:37 | closed: 2021-10-29T13:09:37
url: https://github.com/huggingface/datasets/pull/3170
comments: []
body: Replace `set` with the `unique_values` generator in `zip_dict`. This PR fixes the problem with the different ordering of the example keys across different Python sessions caused by the `zip_dict` call in `Features.decode_example`.
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3170", "html_url": "https://github.com/huggingface/datasets/pull/3170", "diff_url": "https://github.com/huggingface/datasets/pull/3170.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3170.patch", "merged_at": "2021-10-29T13:09...
#3169 (pull request, closed): Configurable max filename length in file locks
id: 1036773357 | user: lmmx | created: 2021-10-26T21:52:55 | updated: 2021-10-28T16:14:14 | closed: 2021-10-28T16:14:13
url: https://github.com/huggingface/datasets/pull/3169
comments: [ "I've also added environment variable configuration so that this can be configured once per machine (e.g. in a `.bashrc` file), as is already done for a few other config variables here.", "Cancelling PR in favour of @mariosasko's in #3173" ]
body: Resolve #2924 (https://github.com/huggingface/datasets/issues/2924#issuecomment-952330956) wherein the assumption of file lock maximum filename length to be 255 raises an OSError on encrypted drives (ecryptFS on Linux uses part of the lower filename, reducing the maximum filename size to 143). Allowing this limit to be...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3169", "html_url": "https://github.com/huggingface/datasets/pull/3169", "diff_url": "https://github.com/huggingface/datasets/pull/3169.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3169.patch", "merged_at": null }
#3168 (issue, closed): OpenSLR/83 is empty
id: 1036673263 | user: tyrius02 | created: 2021-10-26T19:42:21 | updated: 2021-10-29T10:04:09 | closed: 2021-10-29T10:04:09
url: https://github.com/huggingface/datasets/issues/3168
comments: [ "Hi @tyrius02, thanks for reporting. I see you self-assigned this issue: are you working on this?", "@albertvillanova Yes. Figured I introduced the broken config, I should fix it too.\r\n\r\nI've got it working, but I'm struggling with one of the tests. I've started a PR so I/we can work through it.", "Looks li...
body: ## Describe the bug As the summary says, openslr / SLR83 / train is empty. The dataset returned after loading indicates there are **zero** rows. The correct number should be **17877**. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('openslr', 'SLR83') ``` ## Expected resul...
pull_request: null
#3167 (issue, closed): bookcorpusopen no longer works
id: 1036488992 | user: lucadiliello | created: 2021-10-26T16:06:15 | updated: 2021-11-17T15:53:46 | closed: 2021-11-17T15:53:46
url: https://github.com/huggingface/datasets/issues/3167
comments: [ "Hi ! Thanks for reporting :) I think #3280 should fix this", "I tried with the latest changes from #3280 on google colab and it worked fine :)\r\nWe'll do a new release soon, in the meantime you can use the updated version with:\r\n```python\r\nload_dataset(\"bookcorpusopen\", revision=\"master\")\r\n```", "Fi...
body: ## Describe the bug When using the latest version of datasets (1.14.0), I cannot use the `bookcorpusopen` dataset. The process blocks always around `9924 examples [00:06, 1439.61 examples/s]` when preparing the dataset. I also noticed that after half an hour the process is automatically killed because of the RAM usa...
pull_request: null
#3166 (pull request, closed): Deprecate prepare_module
id: 1036450283 | user: albertvillanova | created: 2021-10-26T15:28:24 | updated: 2021-11-05T09:27:37 | closed: 2021-11-05T09:27:36
url: https://github.com/huggingface/datasets/pull/3166
comments: [ "Sounds good, thanks !" ]
body: In version 1.13, `prepare_module` was deprecated. This PR adds a deprecation warning and removes it from all the library, using `dataset_module_factory` or `metric_module_factory` instead. Fix #3165.
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3166", "html_url": "https://github.com/huggingface/datasets/pull/3166", "diff_url": "https://github.com/huggingface/datasets/pull/3166.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3166.patch", "merged_at": "2021-11-05T09:27...
#3165 (issue, closed): Deprecate prepare_module
id: 1036448998 | user: albertvillanova | created: 2021-10-26T15:27:15 | updated: 2021-11-05T09:27:36 | closed: 2021-11-05T09:27:36
url: https://github.com/huggingface/datasets/issues/3165
comments: []
body: In version 1.13, `prepare_module` was deprecated. Add deprecation warning and remove its usage from all the library.
pull_request: null
#3164 (issue, closed): Add raw data files to the Hub with GitHub LFS for canonical dataset
id: 1035662830 | user: zlucia | created: 2021-10-25T23:28:21 | updated: 2021-10-30T19:54:51 | closed: 2021-10-30T19:54:51
url: https://github.com/huggingface/datasets/issues/3164
comments: [ "Hi @zlucia, I would actually suggest hosting the dataset as a huggingface.co-hosted dataset.\r\n\r\nThe only difference with a \"canonical\"/legacy dataset is that it's nested under an organization (here `stanford` or `stanfordnlp` for instance – completely up to you) but then you can upload your data using git-lf...
body: I'm interested in sharing the CaseHOLD dataset (https://arxiv.org/abs/2104.08671) as a canonical dataset on the HuggingFace Hub and would like to add the raw data files to the Hub with GitHub LFS, since it seems like a more sustainable long term storage solution, compared to other storage solutions available to my team...
pull_request: null
#3163 (pull request, closed): Add Image feature
id: 1035475061 | user: mariosasko | created: 2021-10-25T19:07:48 | updated: 2021-12-30T06:37:21 | closed: 2021-12-06T17:49:02
url: https://github.com/huggingface/datasets/pull/3163
comments: [ "Awesome, looking forward to using it :)", "Few additional comments:\r\n* the current API doesn't meet the requirements mentioned in #3145 (e.g. image mime-type). However, this will be doable soon as we also plan to store image bytes alongside paths in arrow files (see https://github.com/huggingface/datasets/pull...
body: Adds the Image feature. This feature is heavily inspired by the recently added Audio feature (#2324). Currently, this PR is pretty simple. Some considerations that need further discussion: * I've decided to use `Pillow`/`PIL` as the image decoding library. Another candidate I considered is `torchvision`, mostly bec...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3163", "html_url": "https://github.com/huggingface/datasets/pull/3163", "diff_url": "https://github.com/huggingface/datasets/pull/3163.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3163.patch", "merged_at": "2021-12-06T17:49...
#3162 (issue, open): `datasets-cli test` should work with datasets without scripts
id: 1035462136 | user: sashavor | created: 2021-10-25T18:52:30 | updated: 2021-11-25T16:04:29 | closed: null
url: https://github.com/huggingface/datasets/issues/3162
comments: [ "> It would be really useful to be able to run `datasets-cli test`for datasets that don't have scripts attached to them (whether the datasets are private or not).\r\n> \r\n> I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeas...
body: It would be really useful to be able to run `datasets-cli test`for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsTest/t...
pull_request: null
#3161 (pull request, closed): Add riddle_sense dataset
id: 1035444292 | user: ziyiwu9494 | created: 2021-10-25T18:30:56 | updated: 2021-11-04T14:01:15 | closed: 2021-11-04T14:01:15
url: https://github.com/huggingface/datasets/pull/3161
comments: [ "@lhoestq \r\nI address all the comments, I think. Thanks! \r\n", "The five test fails are unrelated to this PR and fixed on master so we can ignore them" ]
body: Adding a new dataset for QA with riddles. I'm confused about the tagging process because it looks like the streamlit app loads data from the current repo, so is it something that should be done after merging or off my fork?
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3161", "html_url": "https://github.com/huggingface/datasets/pull/3161", "diff_url": "https://github.com/huggingface/datasets/pull/3161.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3161.patch", "merged_at": "2021-11-04T14:01...
#3160 (pull request, closed): Better error msg if `len(predictions)` doesn't match `len(references)` in metrics
id: 1035274640 | user: mariosasko | created: 2021-10-25T15:25:05 | updated: 2021-11-05T11:44:59 | closed: 2021-11-05T09:31:02
url: https://github.com/huggingface/datasets/pull/3160
comments: [ "Can't test this now but it may be a good improvement indeed.", "I added a function, but it only works with the `list` type. For arrays/tensors, we delegate formatting to the frameworks. " ]
body: Improve the error message in `Metric.add_batch` if `len(predictions)` doesn't match `len(references)`. cc: @BramVanroy (feel free to test this code on your examples and review this PR)
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3160", "html_url": "https://github.com/huggingface/datasets/pull/3160", "diff_url": "https://github.com/huggingface/datasets/pull/3160.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3160.patch", "merged_at": "2021-11-05T09:31...
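The PR above improves the error message when batch lengths disagree. A sketch of the kind of up-front check involved (the function name and wording are illustrative; this is not the actual `Metric.add_batch` code):

```python
def check_batch_lengths(predictions, references):
    """Fail early with a descriptive message instead of an opaque schema error."""
    if len(predictions) != len(references):
        raise ValueError(
            f"Predictions and references have mismatched lengths: "
            f"{len(predictions)} predictions vs. {len(references)} references."
        )
```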
#3159 (pull request, closed): Make inspect.get_dataset_config_names always return a non-empty list
id: 1035174560 | user: albertvillanova | created: 2021-10-25T13:59:43 | updated: 2021-10-29T13:14:37 | closed: 2021-10-28T05:44:49
url: https://github.com/huggingface/datasets/pull/3159
comments: [ "This PR is already working (although not very beautiful; see below): the idea was to have the `DatasetModule.builder_kwargs` accessible from the `builder_cls`, so that this can generate the default builder config (at the class level, without requiring the builder to be instantiated).\r\n\r\nI have a plan for a fol...
body: Make all named configs cases, so that no special unnamed config case needs to be handled differently. Fix #3135.
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3159", "html_url": "https://github.com/huggingface/datasets/pull/3159", "diff_url": "https://github.com/huggingface/datasets/pull/3159.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3159.patch", "merged_at": "2021-10-28T05:44...
#3158 (pull request, closed): Fix string encoding for Value type
id: 1035158070 | user: lhoestq | created: 2021-10-25T13:44:13 | updated: 2021-10-25T14:12:06 | closed: 2021-10-25T14:12:05
url: https://github.com/huggingface/datasets/pull/3158
comments: [ "That was fast! \r\n" ]
body: Some metrics have `string` features but currently it fails if users pass integers instead. Indeed feature encoding that handles the conversion of the user's objects to the right python type is missing a case for `string`, while it already works as expected for integers, floats and booleans Here is an example code th...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3158", "html_url": "https://github.com/huggingface/datasets/pull/3158", "diff_url": "https://github.com/huggingface/datasets/pull/3158.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3158.patch", "merged_at": "2021-10-25T14:12...
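The PR body above describes adding a `string` case to feature encoding so integers passed for string features are coerced rather than rejected. A standalone sketch of that coercion (the function is hypothetical and only illustrates the idea; it is not the library's actual encoder):

```python
def encode_string_value(obj):
    """Coerce scalars to str for a `string` feature, mirroring how the
    existing int/float/bool cases convert user objects (illustrative only)."""
    if obj is None or isinstance(obj, str):
        return obj
    if isinstance(obj, (bool, int, float)):
        return str(obj)
    raise TypeError(f"Cannot encode {type(obj).__name__} as string")
```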
#3157 (pull request, closed): Fixed: duplicate parameter and missing parameter in docstring
id: 1034775165 | user: PanQiWei | created: 2021-10-25T07:26:00 | updated: 2021-10-25T14:02:19 | closed: 2021-10-25T14:02:19
url: https://github.com/huggingface/datasets/pull/3157
comments: []
body: changing duplicate parameter `data_files` in `DatasetBuilder.__init__` to the missing parameter `data_dir`
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3157", "html_url": "https://github.com/huggingface/datasets/pull/3157", "diff_url": "https://github.com/huggingface/datasets/pull/3157.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3157.patch", "merged_at": "2021-10-25T14:02...
#3155 (issue, closed): Illegal instruction (core dumped) at datasets import
id: 1034468757 | user: hacobe | created: 2021-10-24T17:21:36 | updated: 2021-11-18T19:07:04 | closed: 2021-11-18T19:07:03
url: https://github.com/huggingface/datasets/issues/3155
comments: [ "It seems to be an issue with how conda-forge is building the binaries. It works on some machines, but not a machine with AMD Opteron 8384 processors." ]
body: ## Describe the bug I install datasets using conda and when I import datasets I get: "Illegal instruction (core dumped)" ## Steps to reproduce the bug ``` conda create --prefix path/to/env conda activate path/to/env conda install -c huggingface -c conda-forge datasets # exits with output "Illegal instruction...
pull_request: null
#3154 (issue, closed): Sacrebleu unexpected behaviour/requirement for data format
id: 1034361806 | user: BramVanroy | created: 2021-10-24T08:55:33 | updated: 2021-10-31T09:08:32 | closed: 2021-10-31T09:08:31
url: https://github.com/huggingface/datasets/issues/3154
comments: [ "Hi @BramVanroy!\r\n\r\nGood question. This project relies on PyArrow (tables) to store data too big to fit in RAM. In the case of metrics, this means that the number of predictions and references has to match to form a table.\r\n\r\nThat's why your example throws an error even though it matches the schema:\r\n```p...
body: ## Describe the bug When comparing with the original `sacrebleu` implementation, the `datasets` implementation does some strange things that I do not quite understand. This issue was triggered when I was trying to implement TER and found the datasets implementation of BLEU [here](https://github.com/huggingface/dataset...
pull_request: null
#3153 (pull request, closed): Add TER (as implemented in sacrebleu)
id: 1034179198 | user: BramVanroy | created: 2021-10-23T14:26:45 | updated: 2021-11-02T11:04:11 | closed: 2021-11-02T11:04:11
url: https://github.com/huggingface/datasets/pull/3153
comments: [ "The problem appears to stem from the omission of the lines that you mentioned. If you add them back and try examples from [this](https://huggingface.co/docs/datasets/using_metrics.html) tutorial (sacrebleu metric example) the code you implemented works fine.\r\n\r\nI think the purpose of these lines is follows:\r\...
body: Implements TER (Translation Edit Rate) as per its implementation in sacrebleu. Sacrebleu for BLEU scores is already implemented in `datasets` so I thought this would be a nice addition. I started from the sacrebleu implementation, as the two metrics have a lot in common. Verified with sacrebleu's [testing suite](...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3153", "html_url": "https://github.com/huggingface/datasets/pull/3153", "diff_url": "https://github.com/huggingface/datasets/pull/3153.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3153.patch", "merged_at": "2021-11-02T11:04...
#3152 (pull request, closed): Fix some typos in the documentation
id: 1034039379 | user: h4iku | created: 2021-10-23T01:38:35 | updated: 2021-10-25T14:27:36 | closed: 2021-10-25T14:03:48
url: https://github.com/huggingface/datasets/pull/3152
comments: []
body: null
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3152", "html_url": "https://github.com/huggingface/datasets/pull/3152", "diff_url": "https://github.com/huggingface/datasets/pull/3152.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3152.patch", "merged_at": "2021-10-25T14:03...
#3151 (pull request, closed): Re-add faiss to windows testing suite
id: 1033890501 | user: BramVanroy | created: 2021-10-22T19:34:29 | updated: 2021-11-02T10:47:34 | closed: 2021-11-02T10:06:03
url: https://github.com/huggingface/datasets/pull/3151
comments: []
body: In recent versions, `faiss-cpu` seems to be available for Windows as well. See the [PyPi page](https://pypi.org/project/faiss-cpu/#files) to confirm. We can therefore included it for Windows in the setup file. At first tests didn't pass due to problems with permissions as caused by `NamedTemporaryFile` on Windows. T...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3151", "html_url": "https://github.com/huggingface/datasets/pull/3151", "diff_url": "https://github.com/huggingface/datasets/pull/3151.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3151.patch", "merged_at": "2021-11-02T10:06...
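The PR body above mentions permission problems caused by `NamedTemporaryFile` on Windows: while such a file is held open, Windows locks it, so reopening it by name fails. A sketch of the portable pattern (`tempfile.mkstemp` plus explicit cleanup); the helper is illustrative, not the PR's actual test code:

```python
import os
import tempfile

def write_then_read(data: str) -> str:
    """Create a temp file that can safely be reopened by name on any platform."""
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        # Reopening by path works here because the first handle is closed;
        # with NamedTemporaryFile still open, this open() fails on Windows.
        with open(path) as f:
            return f.read()
    finally:
        os.remove(path)
```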
#3150 (issue, closed): Faiss _is_ available on Windows
id: 1033831530 | user: BramVanroy | created: 2021-10-22T18:07:16 | updated: 2021-11-02T10:06:03 | closed: 2021-11-02T10:06:03
url: https://github.com/huggingface/datasets/issues/3150
comments: [ "Sure, feel free to open a PR." ]
body: In the setup file, I find the following: https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/setup.py#L171 However, FAISS does install perfectly fine on Windows on my system. You can also confirm this on the [PyPi page](https://pypi.org/project/faiss-cpu/#files), where Windows wh...
pull_request: null
#3149 (pull request, closed): Add CMU Hinglish DoG Dataset for MT
id: 1033747625 | user: Ishan-Kumar2 | created: 2021-10-22T16:17:25 | updated: 2021-11-15T11:36:42 | closed: 2021-11-15T10:27:45
url: https://github.com/huggingface/datasets/pull/3149
comments: [ "Hi @lhoestq, thanks a lot for the help. I have moved the part as suggested. \r\nAlthough still while running the dummy data script, I face this issue\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/ishan/anaconda3/bin/datasets-cli\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/home/...
body: Address part of #2841 Added the CMU Hinglish DoG Dataset as in GLUECoS. Added it as a seperate dataset as unlike other tasks of GLUE CoS this can't be evaluated for a BERT like model. Consists of parallel dataset between Hinglish (Hindi-English) and English, can be used for Machine Translation between the two. ...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3149", "html_url": "https://github.com/huggingface/datasets/pull/3149", "diff_url": "https://github.com/huggingface/datasets/pull/3149.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3149.patch", "merged_at": "2021-11-15T10:27...
#3148 (issue, closed): Streaming with num_workers != 0
id: 1033685208 | user: justheuristic | created: 2021-10-22T15:07:17 | updated: 2022-07-04T12:14:58 | closed: 2022-07-04T12:14:58
url: https://github.com/huggingface/datasets/issues/3148
comments: [ "I can confirm that I was able to reproduce the bug. This seems odd given that #3423 reports duplicate data retrieval when `num_workers` and `streaming` are used together, which is obviously different from what is reported here. ", "Any update? A possible solution is to have multiple arrow files as shards, and ha...
body: ## Describe the bug When using dataset streaming with pytorch DataLoader, the setting num_workers to anything other than 0 causes the code to freeze forever before yielding the first batch. The code owner is likely @lhoestq ## Steps to reproduce the bug For your convenience, we've prepped a colab notebook th...
pull_request: null
#3147 (pull request, closed): Fix CLI test to ignore verfications when saving infos
id: 1033607659 | user: albertvillanova | created: 2021-10-22T13:52:46 | updated: 2021-10-27T08:01:50 | closed: 2021-10-27T08:01:49
url: https://github.com/huggingface/datasets/pull/3147
comments: []
body: Fix #3146.
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3147", "html_url": "https://github.com/huggingface/datasets/pull/3147", "diff_url": "https://github.com/huggingface/datasets/pull/3147.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3147.patch", "merged_at": "2021-10-27T08:01...
#3146 (issue, closed): CLI test command throws NonMatchingSplitsSizesError when saving infos
id: 1033605947 | user: albertvillanova | created: 2021-10-22T13:50:53 | updated: 2021-10-27T08:01:49 | closed: 2021-10-27T08:01:49
url: https://github.com/huggingface/datasets/issues/3146
comments: []
body: When trying to generate a datset JSON metadata, a `NonMatchingSplitsSizesError` is thrown: ``` $ datasets-cli test datasets/arabic_billion_words --save_infos --all_configs Testing builder 'Alittihad' (1/10) Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: Unknown si...
pull_request: null
#3145 (issue, closed): [when Image type will exist] provide a way to get the data as binary + filename
id: 1033580009 | user: severo | created: 2021-10-22T13:23:49 | updated: 2021-12-22T11:05:37 | closed: 2021-12-22T11:05:36
url: https://github.com/huggingface/datasets/issues/3145
comments: [ "@severo, maybe somehow related to this PR ?\r\n- #3129", "@severo I'll keep that in mind.\r\n\r\nYou can track progress on the Image feature in #3163 (still in the early stage). ", "Hi ! As discussed with @severo offline it looks like the dataset viewer already supports reading PIL images, so maybe the datase...
body: **Is your feature request related to a problem? Please describe.** When a dataset cell contains a value of type Image (be it from a remote URL, an Array2D/3D, or any other way to represent images), I want to be able to write the image to the disk, with the correct filename, and optionally to know its mimetype, in or...
pull_request: null
#3144 (issue, closed): Infer the features if missing
id: 1033573760 | user: severo | created: 2021-10-22T13:17:33 | updated: 2022-09-08T08:23:10 | closed: 2022-09-08T08:23:10
url: https://github.com/huggingface/datasets/issues/3144
comments: [ "Done by @lhoestq here: https://github.com/huggingface/datasets/pull/4500 (https://github.com/huggingface/datasets/pull/4500/files#diff-02930e1d966f4b41f9ddf15d961f16f5466d9bee583138657018c7329f71aa43R1255 in particular)\r\n" ]
body: **Is your feature request related to a problem? Please describe.** Some datasets, in particular community datasets, have no info file, thus no features. **Describe the solution you'd like** If a dataset has no features, the first loaded data (5-10 rows) could be used to infer the type. Related: `datasets` w...
pull_request: null
1,033,569,655
3,143
Provide a way to check if the features (in info) match with the data of a split
open
[ "Related: #3144 " ]
2021-10-22T13:13:36
2021-10-22T13:17:56
null
**Is your feature request related to a problem? Please describe.** I understand that currently the data loaded has not always the type described in the info features **Describe the solution you'd like** Provide a way to check if the rows have the type described by info features **Describe alternatives you'v...
severo
https://github.com/huggingface/datasets/issues/3143
null
false
1,033,566,034
3,142
Provide a way to write a streamed dataset to the disk
open
[ "Yes, I agree this feature is much needed. We could do something similar to what TF does (https://www.tensorflow.org/api_docs/python/tf/data/Dataset#cache). \r\n\r\nIdeally, if the entire streamed dataset is consumed/cached, the generated cache should be reusable for the Arrow dataset.", "@mariosasko Hi big broth...
2021-10-22T13:09:53
2024-01-12T07:26:43
null
**Is your feature request related to a problem? Please describe.** The streaming mode allows to get the 100 first rows of a dataset very quickly. But it does not cache the answer, so a posterior call to get the same 100 rows will send a request to the server again and again. **Describe the solution you'd like** ...
severo
https://github.com/huggingface/datasets/issues/3142
null
false
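The feature request above asks that the first rows fetched in streaming mode be cached so a later call does not hit the server again. A toy illustration of that caching behaviour (the function name, JSON file format, and n-row cutoff are all made up for the sketch; this is not the library's API):

```python
import itertools
import json
import os

def cached_head(rows, n, cache_path):
    """Return the first n rows of an iterable of JSON-serializable dicts,
    persisting them to disk so subsequent calls skip the remote fetch."""
    if os.path.exists(cache_path):
        with open(cache_path) as f:
            return json.load(f)
    head = list(itertools.islice(rows, n))
    with open(cache_path, "w") as f:
        json.dump(head, f)
    return head
```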
1,033,555,910
3,141
Fix caching bugs
closed
[]
2021-10-22T12:59:25
2021-10-22T20:52:08
2021-10-22T13:47:05
This PR fixes some caching bugs (most likely introduced in the latest refactor): * remove ")" added by accident in the dataset dir name * correctly pass the namespace kwargs in `CachedDatasetModuleFactory` * improve the warning message if `HF_DATASETS_OFFLINE is `True`
mariosasko
https://github.com/huggingface/datasets/pull/3141
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3141", "html_url": "https://github.com/huggingface/datasets/pull/3141", "diff_url": "https://github.com/huggingface/datasets/pull/3141.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3141.patch", "merged_at": "2021-10-22T13:47...
true
1,033,524,079
3,139
Fix file/directory deletion on Windows
open
[]
2021-10-22T12:22:08
2021-10-22T12:22:08
null
Currently, on Windows, some attempts to delete a dataset file/directory will fail with the `PerimissionError`. Examples: - download a dataset, then force redownload it in the same session while keeping a reference to the downloaded dataset ```python from datasets import load_dataset dset = load_dataset("sst", s...
mariosasko
https://github.com/huggingface/datasets/issues/3139
null
false
#3138 (issue, open): More fine-grained taxonomy of error types
id: 1033379997 | user: severo | created: 2021-10-22T09:35:29 | updated: 2022-09-20T13:04:42 | closed: null
url: https://github.com/huggingface/datasets/issues/3138
comments: [ "related: #4995\r\n" ]
body: **Is your feature request related to a problem? Please describe.** Exceptions like `FileNotFoundError` can be raised by different parts of the code, and it's hard to detect which one did **Describe the solution you'd like** Give a specific exception type for every group of similar errors **Describe alternat...
pull_request: null
#3137 (pull request, closed): Fix numpy deprecation warning for ragged tensors
id: 1033363652 | user: lhoestq | created: 2021-10-22T09:17:46 | updated: 2021-10-22T16:04:15 | closed: 2021-10-22T16:04:14
url: https://github.com/huggingface/datasets/pull/3137
comments: [ "This'll be a really helpful fix, thank you!" ]
body: Numpy shows a deprecation warning when we call `np.array` on a list of ragged tensors without specifying the `dtype`. If their shapes match, the tensors can be collated together, otherwise the resulting array should have `dtype=np.object`. Fix #3084 cc @Rocketknight1
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3137", "html_url": "https://github.com/huggingface/datasets/pull/3137", "diff_url": "https://github.com/huggingface/datasets/pull/3137.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3137.patch", "merged_at": "2021-10-22T16:04...
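The PR body above describes the two cases: matching shapes collate into a regular array, while ragged shapes need an explicit object dtype. A minimal demonstration (note the PR, written in 2021, spells it `dtype=np.object`; that alias has since been removed from NumPy, so plain `object` is used here):

```python
import numpy as np

# Shapes match: the rows collate into a regular 2-D array.
regular = np.array([[1, 2], [3, 4]])

# Shapes differ ("ragged"): without an explicit dtype this used to emit a
# deprecation warning and is an error in newer NumPy; dtype=object makes
# the intent explicit and yields a 1-D array of list objects.
ragged = np.array([[1, 2, 3], [4, 5]], dtype=object)
```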
#3136 (pull request, closed): Fix script of Arabic Billion Words dataset to return all data
id: 1033360396 | user: albertvillanova | created: 2021-10-22T09:14:24 | updated: 2021-10-22T13:28:41 | closed: 2021-10-22T13:28:40
url: https://github.com/huggingface/datasets/pull/3136
comments: []
body: The script has a bug and only parses and generates a portion of the entire dataset. This PR fixes the loading script so that is properly parses the entire dataset. Current implementation generates the same number of examples as reported in the [original paper](https://arxiv.org/abs/1611.04033) for all configurat...
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3136", "html_url": "https://github.com/huggingface/datasets/pull/3136", "diff_url": "https://github.com/huggingface/datasets/pull/3136.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3136.patch", "merged_at": "2021-10-22T13:28...
#3135 (issue, closed): Make inspect.get_dataset_config_names always return a non-empty list of configs
id: 1033294299 | user: severo | created: 2021-10-22T08:02:50 | updated: 2021-10-28T05:44:49 | closed: 2021-10-28T05:44:49
url: https://github.com/huggingface/datasets/issues/3135
comments: [ "Hi @severo, I guess this issue requests not only to be able to access the configuration name (by using `inspect.get_dataset_config_names`), but the configuration itself as well (I mean you use the name to get the configuration afterwards, maybe using `builder_cls.builder_configs`), is this right?", "Yes, maybe t...
body: **Is your feature request related to a problem? Please describe.** Currently, some datasets have a configuration, while others don't. It would be simpler for the user to always have configuration names to refer to **Describe the solution you'd like** In that sense inspect.get_dataset_config_names should always...
pull_request: null
#3134 (issue, closed): Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py
id: 1033251755 | user: yanan1116 | created: 2021-10-22T07:07:52 | updated: 2023-09-14T01:19:45 | closed: 2022-01-19T14:02:31
url: https://github.com/huggingface/datasets/issues/3134
comments: [ "Hi,\r\n\r\nDid you try to run the code multiple times (GitHub URLs can be down sometimes for various reasons)? I can access `https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py`, so this code is working without an error on my side. \r\n\r\nAdditionally, can you please run the `data...
body: datasets version: 1.12.1 `metric = datasets.load_metric('rouge')` The error: > ConnectionError Traceback (most recent call last) > <ipython-input-3-dd10a0c5212f> in <module> > ----> 1 metric = datasets.load_metric('rouge') > > /usr/local/lib/python3.6/dist-packages/datasets/load....
pull_request: null
#3133 (pull request, closed): Support Audio feature in streaming mode
id: 1032511710 | user: albertvillanova | created: 2021-10-21T13:37:57 | updated: 2021-11-12T14:13:05 | closed: 2021-11-12T14:13:04
url: https://github.com/huggingface/datasets/pull/3133
comments: []
body: Fix #3132.
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3133", "html_url": "https://github.com/huggingface/datasets/pull/3133", "diff_url": "https://github.com/huggingface/datasets/pull/3133.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3133.patch", "merged_at": "2021-11-12T14:13...
#3132 (issue, closed): Support Audio feature in streaming mode
id: 1032505430 | user: albertvillanova | created: 2021-10-21T13:32:18 | updated: 2021-11-12T14:13:04 | closed: 2021-11-12T14:13:04
url: https://github.com/huggingface/datasets/issues/3132
comments: []
body: Currently, Audio feature is only supported for non-streaming datasets. Due to the large size of many speech datasets, we should also support Audio feature in streaming mode.
pull_request: null
#3131 (issue, closed): Add ADE20k
id: 1032309865 | user: NielsRogge | created: 2021-10-21T10:13:09 | updated: 2023-01-27T14:40:20 | closed: 2023-01-27T14:40:20
url: https://github.com/huggingface/datasets/issues/3131
comments: [ "I think we can close this issue since PR [#3607](https://github.com/huggingface/datasets/pull/3607) solves this." ]
body: ## Adding a Dataset - **Name:** ADE20k (actually it's called the MIT Scene Parsing Benchmark, it's actually a subset of ADE20k but a lot of authors still call it ADE20k) - **Description:** A semantic segmentation dataset, consisting of 150 classes. - **Paper:** http://people.csail.mit.edu/bzhou/publication/scene-par...
pull_request: null
#3130 (pull request, closed): Create SECURITY.md
id: 1032299417 | user: zidingz | created: 2021-10-21T10:03:03 | updated: 2021-10-21T14:33:28 | closed: 2021-10-21T14:31:50
url: https://github.com/huggingface/datasets/pull/3130
comments: [ "Hi @zidingz, thanks for your contribution.\r\n\r\nHowever I am closing it because it is a duplicate of a previous PR:\r\n - #2958\r\n\r\n" ]
body: To let the repository confirm [email protected] as its security contact.
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3130", "html_url": "https://github.com/huggingface/datasets/pull/3130", "diff_url": "https://github.com/huggingface/datasets/pull/3130.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3130.patch", "merged_at": null }
1,032,234,167
3,129
Support Audio feature for TAR archives in sequential access
closed
[ "Also do you think we can adapt `cast_column` to keep the same value for this new parameter when the user only wants to change the sampling rate ?", "Thanks for your comments, @lhoestq, I will address them afterwards.\r\n\r\nBut, I think it is more important/urgent first address the current blocking non-passing t...
2021-10-21T08:56:51
2021-11-17T17:42:08
2021-11-17T17:42:07
Add Audio feature support for TAR archived files in sequential access. Fix #3128.
albertvillanova
https://github.com/huggingface/datasets/pull/3129
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3129", "html_url": "https://github.com/huggingface/datasets/pull/3129", "diff_url": "https://github.com/huggingface/datasets/pull/3129.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3129.patch", "merged_at": "2021-11-17T17:42...
true
1,032,201,870
3,128
Support Audio feature for TAR archives in sequential access
closed
[]
2021-10-21T08:23:01
2021-11-17T17:42:07
2021-11-17T17:42:07
Currently, the Audio feature accesses each audio file by its file path. However, streamed TAR archive files do not allow random access to their archived files. Therefore, we should enhance the Audio feature to support TAR archived files in sequential access.
albertvillanova
https://github.com/huggingface/datasets/issues/3128
null
false
1,032,100,613
3,127
datasets-cli: conversion of a tfds dataset to a huggingface one.
open
[ "Hi,\r\n\r\nthe MNIST dataset is already available on the Hub. You can use it as follows:\r\n```python\r\nimport datasets\r\ndataset_dict = datasets.load_dataset(\"mnist\")\r\n```\r\n\r\nAs for the conversion of TFDS datasets to HF datasets, we will be working on it in the coming months, so stay tuned." ]
2021-10-21T06:14:27
2021-10-27T11:36:05
null
### Discussed in https://github.com/huggingface/datasets/discussions/3079 <div type='discussions-op-text'> <sup>Originally posted by **vitalyshalumov** October 14, 2021</sup> I'm trying to convert a tfds dataset to a huggingface one. I've tried: 1. datasets-cli convert --tfds_path ~/tensorflow_datas...
vitalyshalumov
https://github.com/huggingface/datasets/issues/3127
null
false
1,032,093,055
3,126
"arabic_billion_words" dataset does not create the full dataset
closed
[ "Thanks for reporting, @vitalyshalumov.\r\n\r\nApparently the script to parse the data has a bug, and does not generate the entire dataset.\r\n\r\nI'm fixing it." ]
2021-10-21T06:02:38
2021-10-22T13:28:40
2021-10-22T13:28:40
## Describe the bug When running: raw_dataset = load_dataset('arabic_billion_words','Alittihad') the correct dataset file is pulled from the URL. However, the generated dataset includes just a small portion of the data included in the file. This is true for all other portions of the "arabic_billion_words" dataset ('A...
vitalyshalumov
https://github.com/huggingface/datasets/issues/3126
null
false
1,032,046,666
3,125
Add SLR83 to OpenSLR
closed
[]
2021-10-21T04:26:00
2021-10-22T20:10:05
2021-10-22T08:30:22
The PR resolves #3119, adding SLR83 (UK and Ireland dialects) to the previously created OpenSLR dataset.
tyrius02
https://github.com/huggingface/datasets/pull/3125
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3125", "html_url": "https://github.com/huggingface/datasets/pull/3125", "diff_url": "https://github.com/huggingface/datasets/pull/3125.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3125.patch", "merged_at": "2021-10-22T08:30...
true
1,031,976,286
3,124
More efficient nested features encoding
closed
[ "@lhoestq @albertvillanova @mariosasko\r\nCan you please check this out?", "Thanks, done!" ]
2021-10-21T01:55:31
2021-11-02T15:07:13
2021-11-02T11:04:04
Nested encoding of features wastes a lot of time on operations which are effectively doing nothing when lists are used. For example, if in the input we have a list of integers, `encoded_nested_example` will iterate over it and apply `encoded_nested_example` on every element even though it just returns the int as is. ...
eladsegal
https://github.com/huggingface/datasets/pull/3124
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3124", "html_url": "https://github.com/huggingface/datasets/pull/3124", "diff_url": "https://github.com/huggingface/datasets/pull/3124.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3124.patch", "merged_at": "2021-11-02T11:04...
true
1,031,793,207
3,123
Segmentation fault when loading datasets from file
closed
[ "Hi ! I created an issue on Arrow's JIRA after making a minimum reproducible example\r\n\r\nhttps://issues.apache.org/jira/browse/ARROW-14439\r\n\r\n```python\r\nimport io\r\n\r\nimport pyarrow.json as paj\r\n\r\nbatch = b'{\"a\": [], \"b\": 1}\\n{\"b\": 1}'\r\nblock_size = 12\r\n\r\npaj.read_json(\r\n io.BytesI...
2021-10-20T20:16:11
2021-11-02T14:57:07
2021-11-02T14:57:07
## Describe the bug Custom dataset loading sometimes segfaults and kills the process if chunks contain a variety of features. ## Steps to reproduce the bug Download an example file: ``` wget https://gist.githubusercontent.com/TevenLeScao/11e2184394b3fa47d693de2550942c6b/raw/4232704d08fbfcaf93e5b51def9e50515076...
TevenLeScao
https://github.com/huggingface/datasets/issues/3123
null
false
1,031,787,509
3,122
OSError with a custom dataset loading script
closed
[ "Hi,\r\n\r\nthere is a difference in how the `data_dir` is zipped between the `classla/janes_tag` and the `classla/reldi_hr` dataset. After unzipping, for the former, the data files (`*.conllup`) are in the root directory (root -> data files), and for the latter, they are inside the `data` directory (root -> `data`...
2021-10-20T20:08:39
2021-11-23T09:55:38
2021-11-23T09:55:38
## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost identical and they have the same directory struc...
suzanab
https://github.com/huggingface/datasets/issues/3122
null
false
1,031,673,115
3,121
Use huggingface_hub.HfApi to list datasets/metrics
closed
[]
2021-10-20T17:48:29
2021-11-05T11:45:08
2021-11-05T09:48:36
Delete `datasets.inspect.HfApi` and use `huggingface_hub.HfApi` instead. WIP until https://github.com/huggingface/huggingface_hub/pull/429 is merged, then wait for the new release of `huggingface_hub`, update the `huggingface_hub` version in `setup.py` and merge this PR. cc: @lhoestq
mariosasko
https://github.com/huggingface/datasets/pull/3121
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3121", "html_url": "https://github.com/huggingface/datasets/pull/3121", "diff_url": "https://github.com/huggingface/datasets/pull/3121.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3121.patch", "merged_at": "2021-11-05T09:48...
true
1,031,574,511
3,120
Correctly update metadata to preserve features when concatenating datasets with axis=1
closed
[]
2021-10-20T15:54:58
2021-10-22T08:28:51
2021-10-21T14:50:21
This PR correctly updates metadata to preserve higher-level feature types (e.g. `ClassLabel`) in `datasets.concatenate_datasets` when `axis=1`. Previously, we would delete the feature metadata in `datasets.concatenate_datasets` if `axis=1` and restore the feature types from the arrow table schema in `Dataset.__init__`....
mariosasko
https://github.com/huggingface/datasets/pull/3120
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3120", "html_url": "https://github.com/huggingface/datasets/pull/3120", "diff_url": "https://github.com/huggingface/datasets/pull/3120.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3120.patch", "merged_at": "2021-10-21T14:50...
true
1,031,328,044
3,119
Add OpenSLR 83 - Crowdsourced high-quality UK and Ireland English Dialect speech
closed
[ "Ugh. The index files for SLR83 are CSV, not TSV. I need to add logic to process these index files." ]
2021-10-20T12:05:07
2021-10-22T19:00:52
2021-10-22T08:30:22
## Adding a Dataset - **Name:** *openslr** - **Description:** *Data set which contains male and female recordings of English from various dialects of the UK and Ireland.* - **Paper:** *https://www.openslr.org/resources/83/about.html* - **Data:** *Eleven separate data files can be found via https://www.openslr.org/r...
tyrius02
https://github.com/huggingface/datasets/issues/3119
null
false
1,031,309,549
3,118
Fix CI error at each release commit
closed
[]
2021-10-20T11:44:38
2021-10-20T13:02:36
2021-10-20T13:02:36
Fix test_load_dataset_canonical at release commit. Fix #3117.
albertvillanova
https://github.com/huggingface/datasets/pull/3118
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3118", "html_url": "https://github.com/huggingface/datasets/pull/3118", "diff_url": "https://github.com/huggingface/datasets/pull/3118.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3118.patch", "merged_at": "2021-10-20T13:02...
true
1,031,308,083
3,117
CI error at each release commit
closed
[]
2021-10-20T11:42:53
2021-10-20T13:02:35
2021-10-20T13:02:35
After 1.12.0, there is a recurrent CI error at each release commit: https://app.circleci.com/pipelines/github/huggingface/datasets/8289/workflows/665d954d-e409-4602-8202-e678594d2946/jobs/51110 ``` ____________________ LoadTest.test_load_dataset_canonical _____________________ [gw0] win32 -- Python 3.6.8 C:\tools\...
albertvillanova
https://github.com/huggingface/datasets/issues/3117
null
false
1,031,270,611
3,116
Update doc links to point to new docs
closed
[]
2021-10-20T11:00:47
2021-10-22T08:29:28
2021-10-22T08:26:45
This PR: * updates the README links and the ADD_NEW_DATASET template to point to the new docs (the new docs don't have a section with the list of all the possible features, so I added that info to the `Features` docstring, which is then referenced in the ADD_NEW_DATASET template) * fixes some broken links in the `.rs...
mariosasko
https://github.com/huggingface/datasets/pull/3116
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3116", "html_url": "https://github.com/huggingface/datasets/pull/3116", "diff_url": "https://github.com/huggingface/datasets/pull/3116.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3116.patch", "merged_at": "2021-10-22T08:26...
true
1,030,737,524
3,115
Fill in dataset card for NCBI disease dataset
closed
[]
2021-10-19T20:57:05
2021-10-22T08:25:07
2021-10-22T08:25:07
null
edugp
https://github.com/huggingface/datasets/pull/3115
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3115", "html_url": "https://github.com/huggingface/datasets/pull/3115", "diff_url": "https://github.com/huggingface/datasets/pull/3115.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3115.patch", "merged_at": "2021-10-22T08:25...
true
1,030,693,130
3,114
load_from_disk in DatasetDict/Dataset not working with PyArrowHDFS wrapper implementing fsspec.spec.AbstractFileSystem
closed
[ "Hi ! Can you try again with pyarrow 6.0.0 ? I think it includes some changes regarding filesystems compatibility with fsspec.", "Hi @lhoestq! I ended up using `fsspec.implementations.arrow.HadoopFileSystem` which doesn't have the problem I described with pyarrow 5.0.0.\r\n\r\nI'll try again with `PyArrowHDFS` on...
2021-10-19T20:01:45
2022-02-14T14:00:28
2022-02-14T14:00:28
## Describe the bug Passing a PyArrowHDFS implementation of fsspec.spec.AbstractFileSystem (in the `fs` param required by `load_from_disk` methods in `DatasetDict` (in datasets_dict.py) and `Dataset` (in arrow_dataset.py) results in an error when calling the download method in the `fs` parameter. ## Steps to repr...
francisco-perez-sorrosal
https://github.com/huggingface/datasets/issues/3114
null
false
1,030,667,547
3,113
Loading Data from HDF files
closed
[ "I'm currently working on bringing [Ecoset](https://www.pnas.org/doi/10.1073/pnas.2011417118) to huggingface datasets and I would second this request...", "I would also like this support or something similar. Geospatial datasets come in netcdf which is derived from hdf5, or zarr. I've gotten zarr stores to work w...
2021-10-19T19:26:46
2025-08-19T13:28:54
2025-08-19T13:28:54
**Is your feature request related to a problem? Please describe.** More often than not I come across big HDF datasets, and currently there is no straightforward way to feed them to a dataset. **Describe the solution you'd like** I would love to see a `from_h5` method that gets an interface implemented by the user ...
FeryET
https://github.com/huggingface/datasets/issues/3113
null
false
1,030,613,083
3,112
OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
open
[ "I am very unsure on why you tagged me here. I am not a maintainer of the Datasets library and have no idea how to help you.", "fixed", "Ok got it, tensor full of NaNs, cf.\r\n\r\n~\\anaconda3\\envs\\xxx\\lib\\site-packages\\datasets\\arrow_writer.py in write_examples_on_file(self)\r\n315 # This check fails wit...
2021-10-19T18:21:41
2021-10-19T18:52:29
null
## Describe the bug Despite having batches way under 2GB when running `datasets.map()`, after correctly processing the data of the first batch and irrespective of writer_batch_size (say 2, 4, 8, 16, 32, 64 and 128 in my case), it returns the following error: > OverflowError: There was an overflow in the <c...
BenoitDalFerro
https://github.com/huggingface/datasets/issues/3112
null
false
1,030,598,983
3,111
concatenate_datasets removes ClassLabel typing.
closed
[ "Something like this would fix it I think: https://github.com/huggingface/datasets/compare/master...Dref360:HF-3111/concatenate_types?expand=1" ]
2021-10-19T18:05:31
2021-10-21T14:50:21
2021-10-21T14:50:21
## Describe the bug When concatenating two datasets, we lose typing of ClassLabel columns. I can work on this if this is a legitimate bug, ## Steps to reproduce the bug ```python import datasets from datasets import Dataset, ClassLabel, Value, concatenate_datasets DS_LEN = 100 my_dataset = Dataset.from_...
Dref360
https://github.com/huggingface/datasets/issues/3111
null
false
1,030,558,484
3,110
Stream TAR-based dataset using iter_archive
closed
[ "I'm creating a new branch `stream-tar-audio` just for the audio datasets since they need https://github.com/huggingface/datasets/pull/3129 to be merged first", "The CI fails are only related to missing sections or tags in the dataset cards - which is unrelated to this PR" ]
2021-10-19T17:16:24
2021-11-05T17:48:49
2021-11-05T17:48:48
I converted all the datasets based on TAR archives to use iter_archive instead, so that they can be streamed. It means that around 80 datasets become streamable :)
lhoestq
https://github.com/huggingface/datasets/pull/3110
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3110", "html_url": "https://github.com/huggingface/datasets/pull/3110", "diff_url": "https://github.com/huggingface/datasets/pull/3110.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3110.patch", "merged_at": "2021-11-05T17:48...
true
1,030,543,284
3,109
Update BibTeX entry
closed
[]
2021-10-19T16:59:31
2021-10-19T17:13:28
2021-10-19T17:13:27
Update BibTeX entry.
albertvillanova
https://github.com/huggingface/datasets/pull/3109
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3109", "html_url": "https://github.com/huggingface/datasets/pull/3109", "diff_url": "https://github.com/huggingface/datasets/pull/3109.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3109.patch", "merged_at": "2021-10-19T17:13...
true
1,030,405,618
3,108
Add Google BLEU (aka GLEU) metric
closed
[]
2021-10-19T14:48:38
2021-10-25T14:07:04
2021-10-25T14:07:04
This PR adds the NLTK implementation of Google BLEU metric. This is also a part of an effort to resolve an unfortunate naming collision between GLEU for machine translation and GLEU for grammatical error correction. I used [this page](https://huggingface.co/docs/datasets/add_metric.html) for reference. Please, point ...
slowwavesleep
https://github.com/huggingface/datasets/pull/3108
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3108", "html_url": "https://github.com/huggingface/datasets/pull/3108", "diff_url": "https://github.com/huggingface/datasets/pull/3108.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3108.patch", "merged_at": "2021-10-25T14:07...
true
1,030,357,527
3,107
Add paper BibTeX citation
closed
[]
2021-10-19T14:08:11
2021-10-19T14:26:22
2021-10-19T14:26:21
Add paper BibTeX citation to README file.
albertvillanova
https://github.com/huggingface/datasets/pull/3107
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3107", "html_url": "https://github.com/huggingface/datasets/pull/3107", "diff_url": "https://github.com/huggingface/datasets/pull/3107.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3107.patch", "merged_at": "2021-10-19T14:26...
true
1,030,112,473
3,106
Fix URLs in blog_authorship_corpus dataset
closed
[]
2021-10-19T10:06:05
2021-10-19T12:50:40
2021-10-19T12:50:39
After being contacted, the authors of the paper "Effects of Age and Gender on Blogging" confirmed: - the old URLs are no longer valid - there are alternative host URLs Fix #3091.
albertvillanova
https://github.com/huggingface/datasets/pull/3106
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3106", "html_url": "https://github.com/huggingface/datasets/pull/3106", "diff_url": "https://github.com/huggingface/datasets/pull/3106.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3106.patch", "merged_at": "2021-10-19T12:50...
true
1,029,098,843
3,105
download_mode=`force_redownload` does not work on removed datasets
open
[]
2021-10-18T13:12:38
2021-10-22T09:36:10
null
## Describe the bug If a cached dataset is removed from the library, I don't see how to delete it programmatically. I thought that using `force_redownload` would try to refresh the cache, then raise an exception, but it reuses the cache instead. ## Steps to reproduce the bug _requires to already have `wit` in ...
severo
https://github.com/huggingface/datasets/issues/3105
null
false
1,029,080,412
3,104
Missing Zenodo 1.13.3 release
closed
[ "Zenodo has fixed on their side the 1.13.3 release: https://zenodo.org/record/5589150" ]
2021-10-18T12:57:18
2021-10-22T13:22:25
2021-10-22T13:22:24
After `datasets` 1.13.3 release, this does not appear in Zenodo releases: https://zenodo.org/record/5570305 TODO: - [x] Contact Zenodo support - [x] Check it is fixed
albertvillanova
https://github.com/huggingface/datasets/issues/3104
null
false
1,029,069,310
3,103
Fix project description in PyPI
closed
[]
2021-10-18T12:47:29
2021-10-18T12:59:57
2021-10-18T12:59:56
Fix project description appearing in PyPI, so that it contains the content of the README.md file (like transformers). Currently, `datasets` project description appearing in PyPI shows the release instructions addressed to core maintainers: https://pypi.org/project/datasets/1.13.3/ Fix #3102.
albertvillanova
https://github.com/huggingface/datasets/pull/3103
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3103", "html_url": "https://github.com/huggingface/datasets/pull/3103", "diff_url": "https://github.com/huggingface/datasets/pull/3103.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3103.patch", "merged_at": "2021-10-18T12:59...
true
1,029,067,062
3,102
Unsuitable project description in PyPI
closed
[]
2021-10-18T12:45:00
2021-10-18T12:59:56
2021-10-18T12:59:56
Currently, `datasets` project description appearing in PyPI shows the release instructions addressed to core maintainers: https://pypi.org/project/datasets/1.13.3/
albertvillanova
https://github.com/huggingface/datasets/issues/3102
null
false
1,028,966,968
3,101
Update SUPERB to use Audio features
closed
[ "Thank you! Sorry I forgot this one @albertvillanova" ]
2021-10-18T11:05:18
2021-10-18T12:33:54
2021-10-18T12:06:46
This is the same dataset refresh as the other Audio ones: https://github.com/huggingface/datasets/pull/3081 cc @patrickvonplaten
anton-l
https://github.com/huggingface/datasets/pull/3101
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3101", "html_url": "https://github.com/huggingface/datasets/pull/3101", "diff_url": "https://github.com/huggingface/datasets/pull/3101.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3101.patch", "merged_at": "2021-10-18T12:06...
true
1,028,738,180
3,100
Replace FSTimeoutError with parent TimeoutError
closed
[]
2021-10-18T07:37:09
2021-10-18T07:51:55
2021-10-18T07:51:54
PR #3050 introduced a dependency on `fsspec.FSTimeoutError`. Note that this error only exists from `fsspec` version `2021.06.0` (June 2021). To fix #3097, there are 2 alternatives: - Either pinning `fsspec` to versions newer than or equal to `2021.06.0` - Or replacing `fsspec.FSTimeoutError` with its parent `asyncio.Tim...
albertvillanova
https://github.com/huggingface/datasets/pull/3100
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3100", "html_url": "https://github.com/huggingface/datasets/pull/3100", "diff_url": "https://github.com/huggingface/datasets/pull/3100.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3100.patch", "merged_at": "2021-10-18T07:51...
true
1,028,338,078
3,099
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'
closed
[ "Hi @JTWang2000, thanks for reporting.\r\n\r\nHowever, I cannot reproduce your reported bug:\r\n```python\r\n>>> from datasets import load_dataset\r\n\r\n>>> dataset = load_dataset(\"sst\", \"default\")\r\n>>> dataset\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'tokens', 'tre...
2021-10-17T14:17:47
2021-11-09T16:42:29
2021-11-09T16:42:28
## Describe the bug Installing with `pip install datasets` or `conda install -c huggingface -c conda-forge datasets` fails to install datasets ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("sst", "default") ``` ## Actual results ---------------------------...
JTWang2000
https://github.com/huggingface/datasets/issues/3099
null
false
1,028,210,790
3,098
Push to hub capabilities for `Dataset` and `DatasetDict`
closed
[ "Thank you for your reviews! I should have addressed all of your comments, and I added a test to ensure that `private` datasets work correctly too. I have merged the changes in `huggingface_hub`, so the `main` branch can be installed now; and I will release v0.1.0 soon.\r\n\r\nAs blockers for this PR:\r\n- It's sti...
2021-10-17T04:12:44
2021-12-08T16:04:50
2021-11-24T11:25:36
This PR implements a `push_to_hub` method on `Dataset` and `DatasetDict`. This does not currently work in `IterableDatasetDict` nor `IterableDataset` as those are simple dicts and I would like your opinion on how you would like to implement this before going ahead and doing it. This implementation needs to be used w...
LysandreJik
https://github.com/huggingface/datasets/pull/3098
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3098", "html_url": "https://github.com/huggingface/datasets/pull/3098", "diff_url": "https://github.com/huggingface/datasets/pull/3098.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3098.patch", "merged_at": "2021-11-24T11:25...
true
1,027,750,811
3,097
`ModuleNotFoundError: No module named 'fsspec.exceptions'`
closed
[ "Thanks for reporting, @VictorSanh.\r\n\r\nI'm fixing it." ]
2021-10-15T19:34:38
2021-10-18T07:51:54
2021-10-18T07:51:54
## Describe the bug I keep running into an fsspec ModuleNotFound error ## Steps to reproduce the bug ```python >>> from datasets import get_dataset_infos 2021-10-15 15:25:37.863206: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudar...
VictorSanh
https://github.com/huggingface/datasets/issues/3097
null
false
1,027,535,685
3,096
Fix Audio feature mp3 resampling
closed
[]
2021-10-15T15:05:19
2021-10-15T15:38:30
2021-10-15T15:38:30
Issue #3095 is related to mp3 resampling, not to `cast_column`. This PR fixes Audio feature mp3 resampling. Fix #3095.
albertvillanova
https://github.com/huggingface/datasets/pull/3096
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3096", "html_url": "https://github.com/huggingface/datasets/pull/3096", "diff_url": "https://github.com/huggingface/datasets/pull/3096.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3096.patch", "merged_at": "2021-10-15T15:38...
true
1,027,453,146
3,095
`cast_column` makes audio decoding fail
closed
[ "cc @anton-l @albertvillanova ", "Thanks for reporting, @patrickvonplaten.\r\n\r\nI think the issue is related to mp3 resampling, not to `cast_column`.\r\n\r\nYou can check that `cast_column` works OK with non-mp3 audio files:\r\n```python\r\nfrom datasets import load_dataset\r\nimport datasets\r\nds = load_datas...
2021-10-15T13:36:58
2023-04-07T09:43:20
2021-10-15T15:38:30
## Describe the bug After changing the sampling rate automatic decoding fails. ## Steps to reproduce the bug ```python from datasets import load_dataset import datasets ds = load_dataset("common_voice", "ab", split="train") ds = ds.cast_column("audio", datasets.features.Audio(sampling_rate=16_000)) pr...
patrickvonplaten
https://github.com/huggingface/datasets/issues/3095
null
false
1,027,328,633
3,094
Support loading a dataset from SQLite files
closed
[ "for reference Kaggle has a good number of open source datasets stored in sqlite\r\n\r\nAlternatively a tutorial or tool on how to convert from sqlite to parquet would be cool too", "Hello, could we leverage [`pandas.read_sql`](https://pandas.pydata.org/docs/reference/api/pandas.read_sql.html) for this? \r\n\r\nT...
2021-10-15T10:58:41
2022-10-03T16:32:29
2022-10-03T16:32:29
As requested by @julien-c, we could eventually support loading a dataset from SQLite files, as is the case for JSON/CSV files.
albertvillanova
https://github.com/huggingface/datasets/issues/3094
null
false
1,027,262,124
3,093
Error loading json dataset with multiple splits if keys in nested dicts have a different order
closed
[ "Hi, \r\n\r\neven Pandas, which is less strict compared to PyArrow when it comes to reading JSON, doesn't support different orderings:\r\n```python\r\nimport io\r\nimport pandas as pd\r\n\r\ns = \"\"\"\r\n{\"a\": {\"c\": 8, \"b\": 5}}\r\n{\"a\": {\"b\": 7, \"c\": 6}}\r\n\"\"\"\r\n\r\nbuffer = io.StringIO(s)\r\ndf =...
2021-10-15T09:33:25
2022-04-10T14:06:29
2022-04-10T14:06:29
## Describe the bug Loading a json dataset with multiple splits that have nested dicts with keys in different order results in the error below. If the keys in the nested dicts always have the same order or even if you just load a single split in which the nested dicts don't have the same order, everything works fin...
dthulke
https://github.com/huggingface/datasets/issues/3093
null
false
1,027,260,383
3,092
Fix JNLBA dataset
closed
[ "Fix #3089.", "@albertvillanova all tests are passing now. Either you or @lhoestq can review it!" ]
2021-10-15T09:31:14
2022-07-10T14:36:49
2021-10-22T08:23:57
As mentioned in #3089, I've added more tags and also updated the link for the dataset, which was previously using a Google Drive link. I'm having a problem with generating dummy data, as `datasets-cli dummy_data ./datasets/jnlpba --auto_generate --match_text_files "*.iob2"` is giving `datasets.keyhash.DuplicatedKeysError: FAIL...
bhavitvyamalik
https://github.com/huggingface/datasets/pull/3092
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3092", "html_url": "https://github.com/huggingface/datasets/pull/3092", "diff_url": "https://github.com/huggingface/datasets/pull/3092.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3092.patch", "merged_at": "2021-10-22T08:23...
true
1,027,251,530
3,091
`blog_authorship_corpus` is broken
closed
[ "Hi @fdtomasi, thanks for reporting.\r\n\r\nYou are right: the original host data URL does no longer exist.\r\n\r\nI've contacted the authors of the dataset to ask them if they host this dataset in another URL.", "Hi, @fdtomasi, the URL is fixed.\r\n\r\nThe fix is already in our master branch and it will be acces...
2021-10-15T09:20:40
2021-10-19T13:06:10
2021-10-19T12:50:39
## Describe the bug The dataset `blog_authorship_corpus` is broken. By bypassing the checksum checks, the loading does not return any error but the resulting dataset is empty. I suspect it is because the data download url is broken (http://www.cs.biu.ac.il/~koppel/blogs/blogs.zip). ## Steps to reproduce the bug ...
fdtomasi
https://github.com/huggingface/datasets/issues/3091
null
false
1,027,100,371
3,090
Update BibTeX entry
closed
[]
2021-10-15T05:39:27
2021-10-15T07:35:57
2021-10-15T07:35:57
Update BibTeX entry.
albertvillanova
https://github.com/huggingface/datasets/pull/3090
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3090", "html_url": "https://github.com/huggingface/datasets/pull/3090", "diff_url": "https://github.com/huggingface/datasets/pull/3090.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3090.patch", "merged_at": "2021-10-15T07:35...
true
1,026,973,360
3,089
JNLPBA Dataset
closed
[ "# Steps to reproduce\r\n\r\nTo reproduce:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('jnlpba')\r\n\r\ndataset['train'].features['ner_tags']\r\n```\r\nOutput:\r\n```python\r\nSequence(feature=ClassLabel(num_classes=3, names=['O', 'B', 'I'], names_file=None, id=None), length=-1, ...
2021-10-15T01:16:02
2021-10-22T08:23:57
2021-10-22T08:23:57
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug ``` ## Expected results The dataset loading script for this dataset is incorrect. This is a biomedical dataset used for named entity recognition. The entities in ...
sciarrilli
https://github.com/huggingface/datasets/issues/3089
null
false
1,026,920,369
3,088
Use template column_mapping to transmit_format instead of template features
closed
[ "Thanks for fixing!" ]
2021-10-14T23:49:40
2021-10-15T14:40:05
2021-10-15T10:11:04
Use `template.column_mapping` to check for modified columns, since `template.features` represents a generic template/column mapping. Fix #3087 TODO: - [x] Add a test
mariosasko
https://github.com/huggingface/datasets/pull/3088
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3088", "html_url": "https://github.com/huggingface/datasets/pull/3088", "diff_url": "https://github.com/huggingface/datasets/pull/3088.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3088.patch", "merged_at": "2021-10-15T10:11...
true
1,026,780,469
3,087
Removing label column in a text classification dataset yields to errors
closed
[]
2021-10-14T20:12:50
2021-10-15T10:11:04
2021-10-15T10:11:04
## Describe the bug This looks like #3059 but it's not linked to the cache this time. Removing the `label` column from a text classification dataset and then performing any processing will result in an error. To reproduce: ```py from datasets import load_dataset from transformers import AutoTokenizer raw_da...
sgugger
https://github.com/huggingface/datasets/issues/3087
null
false
1,026,481,905
3,086
Remove _resampler from Audio fields
closed
[]
2021-10-14T14:38:50
2021-10-14T15:13:41
2021-10-14T15:13:40
The `_resampler` Audio attribute was implemented to optimize audio resampling, but it should not be cached. This PR removes `_resampler` from Audio fields, so that it is not returned by `fields()` or `asdict()`. Fix #3083.
albertvillanova
https://github.com/huggingface/datasets/pull/3086
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3086", "html_url": "https://github.com/huggingface/datasets/pull/3086", "diff_url": "https://github.com/huggingface/datasets/pull/3086.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3086.patch", "merged_at": "2021-10-14T15:13...
true
1,026,467,384
3,085
Fixes to `to_tf_dataset`
closed
[ "Hi ! Can you give some details about why you need these changes ?", "Hey, sorry, I should have explained! I've been getting a lot of `VisibleDeprecationWarning` from Numpy, due to an issue in the formatter, see #3084 . This is a temporary workaround (since I'm using these methods in the upcoming course) until I ...
2021-10-14T14:25:56
2021-10-21T15:05:29
2021-10-21T15:05:28
null
Rocketknight1
https://github.com/huggingface/datasets/pull/3085
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3085", "html_url": "https://github.com/huggingface/datasets/pull/3085", "diff_url": "https://github.com/huggingface/datasets/pull/3085.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3085.patch", "merged_at": "2021-10-21T15:05...
true
1,026,428,992
3,084
VisibleDeprecationWarning when using `set_format("numpy")`
closed
[ "I just opened a PR and I verified that the code you provided doesn't show any deprecation warning :)" ]
2021-10-14T13:53:01
2021-10-22T16:04:14
2021-10-22T16:04:14
Code to reproduce: ``` from datasets import load_dataset dataset = load_dataset("glue", "mnli") from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('distilbert-base-cased') def tokenize_function(dataset): return tokenizer(dataset['premise']) tokenized_datasets = dataset....
Rocketknight1
https://github.com/huggingface/datasets/issues/3084
null
false
1,026,397,062
3,083
Datasets with Audio feature raise error when loaded from cache due to _resampler parameter
closed
[]
2021-10-14T13:23:53
2021-10-14T15:13:40
2021-10-14T15:13:40
## Describe the bug As reported by @patrickvonplaten, when loaded from the cache, datasets containing the Audio feature raise a TypeError. ## Steps to reproduce the bug ```python from datasets import load_dataset # loading the first time works ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean") # ...
albertvillanova
https://github.com/huggingface/datasets/issues/3083
null
false
1,026,388,994
3,082
Fix error related to huggingface_hub timeout parameter
closed
[]
2021-10-14T13:17:47
2021-10-14T14:39:52
2021-10-14T14:39:51
The `huggingface_hub` package added the `timeout` parameter in version 0.0.19. This PR bumps the minimal required version accordingly. Fix #3080.
albertvillanova
https://github.com/huggingface/datasets/pull/3082
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3082", "html_url": "https://github.com/huggingface/datasets/pull/3082", "diff_url": "https://github.com/huggingface/datasets/pull/3082.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3082.patch", "merged_at": "2021-10-14T14:39...
true
1,026,383,749
3,081
[Audio datasets] Adapting all audio datasets
closed
[ "@lhoestq - are there other important speech datasets that I'm forgetting here? \r\n\r\nThink PR is good to go otherwise", "@lhoestq @albertvillanova - how can we make an exception for the AMI README so that the test doesn't fail? The dataset card definitely should have a data preprocessing section", "Hi @patri...
2021-10-14T13:13:45
2021-10-15T12:52:03
2021-10-15T12:22:33
This PR adds the new `Audio(...)` features - see: https://github.com/huggingface/datasets/pull/2324 to the most important audio datasets: - Librispeech - Timit - Common Voice - AMI - ... (others I'm forgetting now) The PR is currently blocked because the following leads to a problem: ```python from dataset...
patrickvonplaten
https://github.com/huggingface/datasets/pull/3081
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3081", "html_url": "https://github.com/huggingface/datasets/pull/3081", "diff_url": "https://github.com/huggingface/datasets/pull/3081.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3081.patch", "merged_at": "2021-10-15T12:22...
true
1,026,380,626
3,080
Error related to timeout keyword argument
closed
[]
2021-10-14T13:10:58
2021-10-14T14:39:51
2021-10-14T14:39:51
## Describe the bug As reported by @patrickvonplaten, a TypeError is raised when trying to load a dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean") ``` ## Actual results ``` TypeError: dataset_info() got ...
albertvillanova
https://github.com/huggingface/datasets/issues/3080
null
false
1,026,150,362
3,077
Fix loading a metric with internal import
closed
[]
2021-10-14T09:06:58
2021-10-14T09:14:56
2021-10-14T09:14:55
After refactoring the module factory (#2986), a bug was introduced when loading metrics with internal imports. This PR adds a new test case and fixes this bug. Fix #3076. CC: @sgugger @merveenoyan
albertvillanova
https://github.com/huggingface/datasets/pull/3077
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3077", "html_url": "https://github.com/huggingface/datasets/pull/3077", "diff_url": "https://github.com/huggingface/datasets/pull/3077.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3077.patch", "merged_at": "2021-10-14T09:14...
true
1,026,113,484
3,076
Error when loading a metric
closed
[]
2021-10-14T08:29:27
2021-10-14T09:14:55
2021-10-14T09:14:55
## Describe the bug As reported by @sgugger, after last release, exception is thrown when loading a metric. ## Steps to reproduce the bug ```python from datasets import load_metric metric = load_metric("squad_v2") ``` ## Actual results ``` FileNotFoundError Traceback (most recent ...
albertvillanova
https://github.com/huggingface/datasets/issues/3076
null
false
1,026,103,388
3,075
Updates LexGLUE and MultiEURLEX README.md files
closed
[]
2021-10-14T08:19:16
2021-10-18T10:13:40
2021-10-18T10:13:40
Updates LexGLUE and MultiEURLEX README.md files - Fix leaderboard in LexGLUE. - Fix an error in the CaseHOLD data example. - Turn the MultiEURLEX dataset statistics table into HTML so it renders nicely on the HF website.
iliaschalkidis
https://github.com/huggingface/datasets/pull/3075
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3075", "html_url": "https://github.com/huggingface/datasets/pull/3075", "diff_url": "https://github.com/huggingface/datasets/pull/3075.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3075.patch", "merged_at": "2021-10-18T10:13...
true
1,025,940,085
3,074
add XCSR dataset
closed
[ "> Hi ! Thanks for adding this dataset :)\r\n> \r\n> Do you know how the translations were done ? Maybe we can mention that in the dataset card.\r\n> \r\n> The rest looks all good to me :) good job with the dataset script and the dataset card !\r\n> \r\n> Just one thing: we try to have dummy_data.zip files that are...
2021-10-14T04:39:59
2021-11-08T13:52:36
2021-11-08T13:52:36
Hi, I wanted to add the [XCSR](https://inklab.usc.edu//XCSR/xcsr_datasets) dataset to huggingface! :) I followed the instructions for adding a new dataset to huggingface and have all the required files ready now! It would be super helpful if you could take a look and review them. Thanks in advance for your time and ...
yangxqiao
https://github.com/huggingface/datasets/pull/3074
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3074", "html_url": "https://github.com/huggingface/datasets/pull/3074", "diff_url": "https://github.com/huggingface/datasets/pull/3074.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3074.patch", "merged_at": "2021-11-08T13:52...
true
1,025,718,469
3,073
Import error installing with ppc64le
closed
[ "This seems to be an issue with importing PyArrow so I posted the problem [here](https://issues.apache.org/jira/browse/ARROW-14323), and I'm closing this issue.\r\n" ]
2021-10-13T21:37:23
2021-10-14T16:35:46
2021-10-14T16:33:28
## Describe the bug Installing the datasets library on a computer running ppc64le seems to cause an issue when importing the datasets library. ``` python Python 3.6.13 | packaged by conda-forge | (default, Sep 23 2021, 07:37:44) [GCC 9.4.0] on linux Type "help", "copyright", "credits" or "license" for...
gcervantes8
https://github.com/huggingface/datasets/issues/3073
null
false
1,025,233,152
3,072
Fix pathlib patches for streaming
closed
[]
2021-10-13T13:11:15
2021-10-13T13:31:05
2021-10-13T13:31:05
Fix issue https://github.com/huggingface/datasets/issues/2866 (for good this time) `counter` now works in both streaming and non-streaming mode. And the `AttributeError: 'str' object has no attribute 'as_posix'` related to the patch of Path.open is fixed as well. Note: the patches should only affect the datasets...
lhoestq
https://github.com/huggingface/datasets/pull/3072
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3072", "html_url": "https://github.com/huggingface/datasets/pull/3072", "diff_url": "https://github.com/huggingface/datasets/pull/3072.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3072.patch", "merged_at": "2021-10-13T13:31...
true
1,024,893,493
3,071
Custom plain text dataset, plain json dataset and plain csv dataset are removed from datasets template folder
closed
[ "Hi @zixiliuUSC, \r\n\r\nAs explained in the documentation (https://huggingface.co/docs/datasets/loading.html#json), we support loading any dataset in JSON (as well as CSV, text, Parquet) format:\r\n```python\r\nds = load_dataset('json', data_files='my_file.json')\r\n```" ]
2021-10-13T07:32:10
2021-10-13T08:27:04
2021-10-13T08:27:03
## Adding a Dataset - **Name:** text, json, csv - **Description:** I am developing a customized dataset loading script. The problem is mainly that my custom dataset is separated into many files and I can only find a dataset loading template in [https://github.com/huggingface/datasets/blob/1.2.1/datasets/json/json.py](ht...
zixiliuUSC
https://github.com/huggingface/datasets/issues/3071
null
false
1,024,856,745
3,070
Fix Windows CI with FileNotFoundError when setting up s3_base fixture
closed
[ "Thanks ! Sorry for the inconvenience ^^' " ]
2021-10-13T06:49:01
2021-10-13T08:55:13
2021-10-13T06:49:48
Fix #3069.
albertvillanova
https://github.com/huggingface/datasets/pull/3070
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3070", "html_url": "https://github.com/huggingface/datasets/pull/3070", "diff_url": "https://github.com/huggingface/datasets/pull/3070.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3070.patch", "merged_at": "2021-10-13T06:49...
true