| id | number | title | state | comments | created_at | updated_at | closed_at | body | user | html_url | pull_request | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
890,296,262 | 2,353 | Update README validation rules | closed | [] | 2021-05-12T16:57:26 | 2021-05-14T08:56:06 | 2021-05-14T08:56:06 | This PR allows unexpected subsections under third-level headings. All except `Contributions`.
@lhoestq | gchhablani | https://github.com/huggingface/datasets/pull/2353 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2353",
"html_url": "https://github.com/huggingface/datasets/pull/2353",
"diff_url": "https://github.com/huggingface/datasets/pull/2353.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2353.patch",
"merged_at": "2021-05-14T08:56... | true |
889,810,100 | 2,352 | Set to_json default to JSON lines | closed | [] | 2021-05-12T08:19:25 | 2021-05-21T09:01:14 | 2021-05-21T09:01:13 | With this PR, the method `Dataset.to_json`:
- is added to the docs
- defaults to JSON lines | albertvillanova | https://github.com/huggingface/datasets/pull/2352 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2352",
"html_url": "https://github.com/huggingface/datasets/pull/2352",
"diff_url": "https://github.com/huggingface/datasets/pull/2352.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2352.patch",
"merged_at": "2021-05-21T09:01... | true |
889,584,953 | 2,351 | Simplify faiss index save | closed | [] | 2021-05-12T03:54:10 | 2021-05-17T13:41:41 | 2021-05-17T13:41:41 | Fixes #2350
In some cases, Faiss GPU index objects have neither "device" nor "getDevice" attributes. This possibly happens when some part of the index is computed on the CPU.
In particular, this would happen with the index `OPQ16_128,IVF512,PQ32` (issue #2350). I did check it, but it is likely that `OPQ` or `PQ` transfor... | Guitaricet | https://github.com/huggingface/datasets/pull/2351 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2351",
"html_url": "https://github.com/huggingface/datasets/pull/2351",
"diff_url": "https://github.com/huggingface/datasets/pull/2351.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2351.patch",
"merged_at": "2021-05-17T13:41... | true |
889,580,247 | 2,350 | `FaissIndex.save` throws error on GPU | closed | [] | 2021-05-12T03:41:56 | 2021-05-17T13:41:41 | 2021-05-17T13:41:41 | ## Describe the bug
After training an index with a factory string `OPQ16_128,IVF512,PQ32` on GPU, `.save_faiss_index` throws this error.
```
File "index_wikipedia.py", line 119, in <module>
data["train"].save_faiss_index("text_emb", index_save_path)
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8... | Guitaricet | https://github.com/huggingface/datasets/issues/2350 | null | false |
888,586,018 | 2,349 | Update task_ids for Ascent KB | closed | [] | 2021-05-11T20:44:33 | 2021-05-17T10:53:14 | 2021-05-17T10:48:34 | This "other-other-knowledge-base" task is better suited for the dataset. | phongnt570 | https://github.com/huggingface/datasets/pull/2349 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2349",
"html_url": "https://github.com/huggingface/datasets/pull/2349",
"diff_url": "https://github.com/huggingface/datasets/pull/2349.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2349.patch",
"merged_at": "2021-05-17T10:48... | true |
887,927,737 | 2,348 | Add tests for dataset cards | closed | [] | 2021-05-11T17:14:27 | 2021-05-21T12:10:47 | 2021-05-21T12:10:47 | Adding tests for dataset cards
This PR will potentially remove the scripts being used for dataset tags and readme validation.
Additionally, this will allow testing dataset readmes by providing the name as follows:
```bash
pytest tests/test_dataset_cards.py::test_dataset_tags[fashion_mnist]
```
and
```bas... | gchhablani | https://github.com/huggingface/datasets/pull/2348 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2348",
"html_url": "https://github.com/huggingface/datasets/pull/2348",
"diff_url": "https://github.com/huggingface/datasets/pull/2348.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2348.patch",
"merged_at": "2021-05-21T12:10... | true |
887,404,868 | 2,347 | Add an API to access the language and pretty name of a dataset | closed | [] | 2021-05-11T14:10:08 | 2022-10-05T17:16:54 | 2022-10-05T17:16:53 | It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Transformers examples scripts. | sgugger | https://github.com/huggingface/datasets/issues/2347 | null | false |
886,632,114 | 2,346 | Add Qasper Dataset | closed | [] | 2021-05-11T09:25:44 | 2021-05-18T12:28:28 | 2021-05-18T12:28:28 | [Question Answering on Scientific Research Papers](https://allenai.org/project/qasper/home)
Doing NLP on NLP papers to do NLP ♻️ I had to add it~
- [x] Add README (just gotta fill out some more )
- [x] Dataloader code
- [x] Make dummy dataset
- [x] generate dataset infos
- [x] Tests
| cceyda | https://github.com/huggingface/datasets/pull/2346 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2346",
"html_url": "https://github.com/huggingface/datasets/pull/2346",
"diff_url": "https://github.com/huggingface/datasets/pull/2346.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2346.patch",
"merged_at": "2021-05-18T12:28... | true |
886,586,872 | 2,345 | [Question] How to move and reuse preprocessed dataset? | closed | [] | 2021-05-11T09:09:17 | 2021-06-11T04:39:11 | 2021-06-11T04:39:11 | Hi, I am training a GPT-2 model from scratch using run_clm.py.
I want to move and reuse the preprocessed dataset (it takes 2 hours to preprocess).
I tried to:
copy path_to_cache_dir/datasets to new_cache_dir/datasets
set export HF_DATASETS_CACHE="new_cache_dir/"
but the program still re-preprocess the whole dataset... | AtmaHou | https://github.com/huggingface/datasets/issues/2345 | null | false |
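The relocation steps this question describes can be sketched with the standard library alone; the helper name is hypothetical, and the layout (a `datasets/` subfolder under the cache root) is an assumption about the default cache:

```python
import os
import shutil

def relocate_cache(old_cache, new_cache):
    # Copy the preprocessed arrow files to the new cache root.
    shutil.copytree(os.path.join(old_cache, "datasets"),
                    os.path.join(new_cache, "datasets"))
    # Point the loader at the new root; HF_DATASETS_CACHE must be set
    # before `datasets` is imported, or exported in the shell beforehand.
    os.environ["HF_DATASETS_CACHE"] = new_cache
```

Note that even with the files copied, fingerprint mismatches can still trigger re-preprocessing, which is what the reporter observed.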
885,331,505 | 2,344 | Is there a way to join multiple datasets in one? | open | [] | 2021-05-10T23:16:10 | 2022-10-05T17:27:05 | null | **Is your feature request related to a problem? Please describe.**
I need to join 2 datasets, one that is in the hub and another I've created from my files. Is there an easy way to join these 2?
**Describe the solution you'd like**
Id like to join them with a merge or join method, just like pandas dataframes.
**Add... | avacaondata | https://github.com/huggingface/datasets/issues/2344 | null | false |
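The pandas-style inner join requested here can be sketched library-free on rows represented as dicts; the helper and column names are illustrative, not a `datasets` API:

```python
def inner_join(left, right, on):
    # Index the right-hand rows by the join key, then emit one merged
    # dict per matching (left, right) pair; left values win on conflicts.
    index = {}
    for row in right:
        index.setdefault(row[on], []).append(row)
    joined = []
    for row in left:
        for match in index.get(row[on], []):
            joined.append({**match, **row})
    return joined
```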
883,208,539 | 2,343 | Columns are removed before or after map function applied? | open | [] | 2021-05-10T02:36:20 | 2022-10-24T11:31:55 | null | ## Describe the bug
According to the documentation when applying map function the [remove_columns ](https://huggingface.co/docs/datasets/processing.html#removing-columns) will be removed after they are passed to the function, but in the [source code](https://huggingface.co/docs/datasets/package_reference/main_classes.... | taghizad3h | https://github.com/huggingface/datasets/issues/2343 | null | false |
882,981,420 | 2,342 | Docs - CER above 1 | closed | [] | 2021-05-09T23:41:00 | 2021-05-10T13:34:00 | 2021-05-10T13:34:00 | CER can actually be greater than 1. | borisdayma | https://github.com/huggingface/datasets/pull/2342 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2342",
"html_url": "https://github.com/huggingface/datasets/pull/2342",
"diff_url": "https://github.com/huggingface/datasets/pull/2342.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2342.patch",
"merged_at": "2021-05-10T13:34... | true |
882,370,933 | 2,341 | Added the Ascent KB | closed | [] | 2021-05-09T14:17:39 | 2021-05-11T09:16:59 | 2021-05-11T09:16:59 | Added the Ascent Commonsense KB of 8.9M assertions.
- Paper: [Advanced Semantics for Commonsense Knowledge Extraction (WWW'21)](https://arxiv.org/abs/2011.00905)
- Website: https://ascent.mpi-inf.mpg.de/
(I am the author of the dataset) | phongnt570 | https://github.com/huggingface/datasets/pull/2341 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2341",
"html_url": "https://github.com/huggingface/datasets/pull/2341",
"diff_url": "https://github.com/huggingface/datasets/pull/2341.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2341.patch",
"merged_at": "2021-05-11T09:16... | true |
882,370,824 | 2,340 | More consistent copy logic | closed | [] | 2021-05-09T14:17:33 | 2021-05-11T08:58:33 | 2021-05-11T08:58:33 | Use `info.copy()` instead of `copy.deepcopy(info)`.
`Features.copy` now creates a deep copy. | mariosasko | https://github.com/huggingface/datasets/pull/2340 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2340",
"html_url": "https://github.com/huggingface/datasets/pull/2340",
"diff_url": "https://github.com/huggingface/datasets/pull/2340.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2340.patch",
"merged_at": "2021-05-11T08:58... | true |
882,046,077 | 2,338 | fixed download link for web_science | closed | [] | 2021-05-09T09:12:20 | 2021-05-10T13:35:53 | 2021-05-10T13:35:53 | Fixes #2337. Should work with:
`dataset = load_dataset("web_of_science", "WOS11967", ignore_verifications=True)` | bhavitvyamalik | https://github.com/huggingface/datasets/pull/2338 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2338",
"html_url": "https://github.com/huggingface/datasets/pull/2338",
"diff_url": "https://github.com/huggingface/datasets/pull/2338.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2338.patch",
"merged_at": "2021-05-10T13:35... | true |
881,610,567 | 2,337 | NonMatchingChecksumError for web_of_science dataset | closed | [] | 2021-05-09T02:02:02 | 2021-05-10T13:35:53 | 2021-05-10T13:35:53 | NonMatchingChecksumError when trying to download the web_of_science dataset.
>NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://data.mendeley.com/datasets/9rw3vkcfy4/6/files/c9ea673d-5542-44c0-ab7b-f1311f7d61df/WebOfScience.zip?dl=1']
Setting `ignore_verifications=True` results... | nbroad1881 | https://github.com/huggingface/datasets/issues/2337 | null | false |

881,298,783 | 2,336 | Fix overflow issue in interpolation search | closed | [] | 2021-05-08T20:51:36 | 2021-05-10T13:29:07 | 2021-05-10T13:26:12 | Fixes #2335
More info about this error can be found [here](https://stackoverflow.com/questions/53239890/why-do-i-keep-getting-this-error-runtimewarning-overflow-encountered-in-int-sc/53240100). | mariosasko | https://github.com/huggingface/datasets/pull/2336 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2336",
"html_url": "https://github.com/huggingface/datasets/pull/2336",
"diff_url": "https://github.com/huggingface/datasets/pull/2336.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2336.patch",
"merged_at": "2021-05-10T13:26... | true |
881,291,887 | 2,335 | Index error in Dataset.map | closed | [] | 2021-05-08T20:44:57 | 2021-05-10T13:26:12 | 2021-05-10T13:26:12 | The following code, if executed on master, raises an IndexError (due to overflow):
```python
>>> from datasets import *
>>> d = load_dataset("bookcorpus", split="train")
Reusing dataset bookcorpus (C:\Users\Mario\.cache\huggingface\datasets\bookcorpus\plain_text\1.0.0\44662c4a114441c35200992bea923b170e6f13f2f0beb7c... | mariosasko | https://github.com/huggingface/datasets/issues/2335 | null | false |
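The overflow behind this error comes from interpolation search arithmetic: with fixed-width integers, the product `(hi - lo) * (key - arr[lo])` can exceed the int64 range on a corpus as large as bookcorpus. A hedged sketch of the search with a cast-to-Python-int guard (the fix in #2336 is analogous in spirit; its exact code may differ):

```python
def interpolation_search(arr, key):
    # Search a sorted sequence by estimating the key's position from the
    # value range, instead of always bisecting the midpoint.
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= key <= arr[hi]:
        if arr[hi] == arr[lo]:
            return lo if arr[lo] == key else -1
        # int() casts guard against fixed-width overflow when arr holds
        # numpy integers; plain Python ints have arbitrary precision.
        pos = lo + (int(hi) - int(lo)) * (int(key) - int(arr[lo])) \
                 // (int(arr[hi]) - int(arr[lo]))
        if arr[pos] == key:
            return pos
        if arr[pos] < key:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1
```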
879,810,107 | 2,334 | Updating the DART file checksums in GEM | closed | [] | 2021-05-07T21:53:44 | 2021-05-07T22:18:10 | 2021-05-07T22:18:10 | The DART files were just updated on the source GitHub
https://github.com/Yale-LILY/dart/commit/34b3c872da4811523e334f1631e54ca8105dffab | yjernite | https://github.com/huggingface/datasets/pull/2334 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2334",
"html_url": "https://github.com/huggingface/datasets/pull/2334",
"diff_url": "https://github.com/huggingface/datasets/pull/2334.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2334.patch",
"merged_at": "2021-05-07T22:18... | true |
879,214,067 | 2,333 | Fix duplicate keys | closed | [] | 2021-05-07T15:28:08 | 2021-05-08T21:47:31 | 2021-05-07T15:57:08 | As noticed in https://github.com/huggingface/datasets/pull/2245, many datasets yield duplicate keys.
Most of the time it was because the counter used for ids were reset at each new data file. | lhoestq | https://github.com/huggingface/datasets/pull/2333 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2333",
"html_url": "https://github.com/huggingface/datasets/pull/2333",
"diff_url": "https://github.com/huggingface/datasets/pull/2333.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2333.patch",
"merged_at": "2021-05-07T15:57... | true |
879,041,608 | 2,332 | Add note about indices mapping in save_to_disk docstring | closed | [] | 2021-05-07T13:49:42 | 2021-05-07T17:20:48 | 2021-05-07T17:20:48 | lhoestq | https://github.com/huggingface/datasets/pull/2332 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2332",
"html_url": "https://github.com/huggingface/datasets/pull/2332",
"diff_url": "https://github.com/huggingface/datasets/pull/2332.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2332.patch",
"merged_at": "2021-05-07T17:20... | true | |
879,031,427 | 2,331 | Add Topical-Chat | open | [] | 2021-05-07T13:43:59 | 2021-05-07T13:43:59 | null | ## Adding a Dataset
- **Name:** Topical-Chat
- **Description:** a knowledge-grounded human-human conversation dataset where the underlying knowledge spans 8 broad topics and conversation partners don’t have explicitly defined roles
- **Paper:** https://www.isca-speech.org/archive/Interspeech_2019/pdfs/3079.pdf
- **... | ktangri | https://github.com/huggingface/datasets/issues/2331 | null | false |
878,490,927 | 2,330 | Allow passing `desc` to `tqdm` in `Dataset.map()` | closed | [] | 2021-05-07T05:52:54 | 2021-05-26T14:59:21 | 2021-05-26T14:59:21 | It's normal to have many `map()` calls, and some of them can take a few minutes;
it would be nice to have a description on the progress bar.
Alternative solution:
Print the description before/after the `map()` call. | changjonathanc | https://github.com/huggingface/datasets/issues/2330 | null | false |
877,924,198 | 2,329 | Add cache dir for in-memory datasets | closed | [] | 2021-05-06T19:35:32 | 2021-06-08T19:46:48 | 2021-06-08T19:06:46 | Adds the cache dir attribute to DatasetInfo as suggested by @lhoestq.
Should fix #2322 | mariosasko | https://github.com/huggingface/datasets/pull/2329 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2329",
"html_url": "https://github.com/huggingface/datasets/pull/2329",
"diff_url": "https://github.com/huggingface/datasets/pull/2329.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2329.patch",
"merged_at": null
} | true |
877,673,896 | 2,328 | Add Matthews/Pearson/Spearman correlation metrics | closed | [] | 2021-05-06T16:09:27 | 2021-05-06T16:58:10 | 2021-05-06T16:58:10 | Added three metrics:
- The Matthews correlation coefficient (from sklearn)
- The Pearson correlation coefficient (from scipy)
- The Spearman correlation coefficient (from scipy)
cc @sgugger | lhoestq | https://github.com/huggingface/datasets/pull/2328 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2328",
"html_url": "https://github.com/huggingface/datasets/pull/2328",
"diff_url": "https://github.com/huggingface/datasets/pull/2328.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2328.patch",
"merged_at": "2021-05-06T16:58... | true |
877,565,831 | 2,327 | A syntax error in example | closed | [] | 2021-05-06T14:34:44 | 2021-05-20T03:04:19 | 2021-05-20T03:04:19 | 
Sorry to report with an image, I can't find the template source code of this snippet. | mymusise | https://github.com/huggingface/datasets/issues/2327 | null | false |
876,829,254 | 2,326 | Enable auto-download for PAN-X / Wikiann domain in XTREME | closed | [] | 2021-05-05T20:58:38 | 2021-05-07T08:41:10 | 2021-05-07T08:41:10 | This PR replaces the manual download of the `PAN-X.lang` domains with an auto-download from a Dropbox link provided by the Wikiann author. We also add the relevant dummy data for these domains.
While re-generating `dataset_infos.json` I ran into a `KeyError` in the `udpos.Arabic` domain so have included a fix for th... | lewtun | https://github.com/huggingface/datasets/pull/2326 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2326",
"html_url": "https://github.com/huggingface/datasets/pull/2326",
"diff_url": "https://github.com/huggingface/datasets/pull/2326.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2326.patch",
"merged_at": "2021-05-07T08:41... | true |
876,653,121 | 2,325 | Added the HLGD dataset | closed | [] | 2021-05-05T16:53:29 | 2021-05-12T14:55:13 | 2021-05-12T14:16:38 | Added the Headline Grouping Dataset (HLGD), from the NAACL2021 paper: News Headline Grouping as a Challenging NLU Task
Dataset Link: https://github.com/tingofurro/headline_grouping
Paper link: https://people.eecs.berkeley.edu/~phillab/pdfs/NAACL2021_HLG.pdf | tingofurro | https://github.com/huggingface/datasets/pull/2325 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2325",
"html_url": "https://github.com/huggingface/datasets/pull/2325",
"diff_url": "https://github.com/huggingface/datasets/pull/2325.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2325.patch",
"merged_at": "2021-05-12T14:16... | true |
876,602,064 | 2,324 | Create Audio feature | closed | [] | 2021-05-05T15:55:22 | 2021-10-13T10:26:33 | 2021-10-13T10:26:33 | Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanc... | albertvillanova | https://github.com/huggingface/datasets/pull/2324 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2324",
"html_url": "https://github.com/huggingface/datasets/pull/2324",
"diff_url": "https://github.com/huggingface/datasets/pull/2324.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2324.patch",
"merged_at": "2021-10-13T10:26... | true |
876,438,507 | 2,323 | load_dataset("timit_asr") gives back duplicates of just one sample text | closed | [] | 2021-05-05T13:14:48 | 2021-05-07T10:32:30 | 2021-05-07T10:32:30 | ## Describe the bug
When you look up on key ["train"] and then ['text'], you get back a list with just one sentence duplicated 4620 times. Namely, the sentence "Would such an act of refusal be useful?". Similarly when you look up ['test'] and then ['text'], the list is one sentence repeated "The bungalow was pleasant... | ekeleshian | https://github.com/huggingface/datasets/issues/2323 | null | false |
876,383,853 | 2,322 | Calls to map are not cached. | closed | [] | 2021-05-05T12:11:27 | 2021-06-08T19:10:02 | 2021-06-08T19:08:21 | ## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):
print("executed", i[:10])... | villmow | https://github.com/huggingface/datasets/issues/2322 | null | false |
876,304,364 | 2,321 | Set encoding in OSCAR dataset | closed | [] | 2021-05-05T10:27:03 | 2021-05-05T10:50:55 | 2021-05-05T10:50:55 | Set explicit `utf-8` encoding in OSCAR dataset, to avoid using the system default `cp1252` on Windows platforms.
Fix #2319. | albertvillanova | https://github.com/huggingface/datasets/pull/2321 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2321",
"html_url": "https://github.com/huggingface/datasets/pull/2321",
"diff_url": "https://github.com/huggingface/datasets/pull/2321.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2321.patch",
"merged_at": "2021-05-05T10:50... | true |
876,257,026 | 2,320 | Set default name in init_dynamic_modules | closed | [] | 2021-05-05T09:30:03 | 2021-05-06T07:57:54 | 2021-05-06T07:57:54 | Set default value for the name of dynamic modules.
Close #2318. | albertvillanova | https://github.com/huggingface/datasets/pull/2320 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2320",
"html_url": "https://github.com/huggingface/datasets/pull/2320",
"diff_url": "https://github.com/huggingface/datasets/pull/2320.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2320.patch",
"merged_at": "2021-05-06T07:57... | true |
876,251,376 | 2,319 | UnicodeDecodeError for OSCAR (Afrikaans) | closed | [] | 2021-05-05T09:22:52 | 2021-05-05T10:57:31 | 2021-05-05T10:50:55 | ## Describe the bug
When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("oscar", "unshuffled_deduplicated_af")
```... | sgraaf | https://github.com/huggingface/datasets/issues/2319 | null | false |
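The fix merged for this issue boils down to opening data files with an explicit encoding instead of the platform default (`cp1252` on Windows); a minimal stdlib sketch, with an illustrative helper name:

```python
def read_lines(path):
    # An explicit utf-8 makes reads behave identically on Linux, macOS
    # and Windows; without it, Windows falls back to cp1252 and raises
    # UnicodeDecodeError on multi-byte utf-8 sequences.
    with open(path, encoding="utf-8") as f:
        return f.read().splitlines()
```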
876,212,460 | 2,318 | [api request] API to obtain "dataset_module" dynamic path? | closed | [] | 2021-05-05T08:40:48 | 2021-05-06T08:45:45 | 2021-05-06T07:57:54 | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
This is an awesome library.
It seems like the dynamic module path in this library has broken some of hyperparameter tuning functionality: https://discuss.huggingface.co/t/using-hyperparamet... | richardliaw | https://github.com/huggingface/datasets/issues/2318 | null | false |
875,767,318 | 2,317 | Fix incorrect version specification for the pyarrow package | closed | [] | 2021-05-04T19:30:20 | 2021-05-05T10:09:16 | 2021-05-05T09:21:58 | This PR addresses the bug in the pyarrow version specification, which is detailed in #2316 .
Simply, I put a comma between the version bounds.
Fix #2316. | cemilcengiz | https://github.com/huggingface/datasets/pull/2317 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2317",
"html_url": "https://github.com/huggingface/datasets/pull/2317",
"diff_url": "https://github.com/huggingface/datasets/pull/2317.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2317.patch",
"merged_at": "2021-05-05T09:21... | true |
875,756,353 | 2,316 | Incorrect version specification for pyarrow | closed | [] | 2021-05-04T19:15:11 | 2021-05-05T10:10:03 | 2021-05-05T10:10:03 | ## Describe the bug
The pyarrow dependency is incorrectly specified in setup.py file, in [this line](https://github.com/huggingface/datasets/blob/3a3e5a4da20bfcd75f8b6a6869b240af8feccc12/setup.py#L77).
Also as a snippet:
```python
"pyarrow>=1.0.0<4.0.0",
```
## Steps to reproduce the bug
```bash
pip install... | cemilcengiz | https://github.com/huggingface/datasets/issues/2316 | null | false |
875,742,200 | 2,315 | Datasets cli improvements | closed | [] | 2021-05-04T18:55:11 | 2021-05-10T16:36:51 | 2021-05-10T16:36:50 | This PR:
* replaces the code from the `bug_report.md` that was used to get relevant system info with a dedicated command (a more elegant approach than copy-pasting the code IMO)
* removes the `download` command (copied from the transformers repo?)
* adds missing help messages to the cli commands
| mariosasko | https://github.com/huggingface/datasets/pull/2315 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2315",
"html_url": "https://github.com/huggingface/datasets/pull/2315",
"diff_url": "https://github.com/huggingface/datasets/pull/2315.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2315.patch",
"merged_at": "2021-05-10T16:36... | true |
875,729,271 | 2,314 | Minor refactor prepare_module | closed | [] | 2021-05-04T18:37:26 | 2021-10-13T09:07:34 | 2021-10-13T09:07:34 | Start to refactor `prepare_module` to try to decouple functionality.
This PR does:
- extract function `_initialize_dynamic_modules_namespace_package`
- extract function `_find_module_in_github_or_s3`
- some renaming of variables
- use of f-strings | albertvillanova | https://github.com/huggingface/datasets/pull/2314 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2314",
"html_url": "https://github.com/huggingface/datasets/pull/2314",
"diff_url": "https://github.com/huggingface/datasets/pull/2314.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2314.patch",
"merged_at": null
} | true |
875,475,367 | 2,313 | Remove unused head_hf_s3 function | closed | [] | 2021-05-04T13:42:06 | 2021-05-07T09:31:42 | 2021-05-07T09:31:42 | Currently, the function `head_hf_s3` is not used:
- its returned result is not used
- it does not raise any exception, as exceptions are caught and returned (not raised)
This PR removes it. | albertvillanova | https://github.com/huggingface/datasets/pull/2313 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2313",
"html_url": "https://github.com/huggingface/datasets/pull/2313",
"diff_url": "https://github.com/huggingface/datasets/pull/2313.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2313.patch",
"merged_at": null
} | true |
875,435,726 | 2,312 | Add rename_columnS method | closed | [] | 2021-05-04T12:57:53 | 2021-05-04T13:43:13 | 2021-05-04T13:43:12 | Cherry-picked from #2255 | SBrandeis | https://github.com/huggingface/datasets/pull/2312 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2312",
"html_url": "https://github.com/huggingface/datasets/pull/2312",
"diff_url": "https://github.com/huggingface/datasets/pull/2312.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2312.patch",
"merged_at": "2021-05-04T13:43... | true |
875,262,208 | 2,311 | Add SLR52, SLR53 and SLR54 to OpenSLR | closed | [] | 2021-05-04T09:08:03 | 2021-05-07T09:50:55 | 2021-05-07T09:50:55 | Add large speech datasets for Sinhala, Bengali and Nepali. | cahya-wirawan | https://github.com/huggingface/datasets/pull/2311 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2311",
"html_url": "https://github.com/huggingface/datasets/pull/2311",
"diff_url": "https://github.com/huggingface/datasets/pull/2311.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2311.patch",
"merged_at": "2021-05-07T09:50... | true |
875,096,051 | 2,310 | Update README.md | closed | [] | 2021-05-04T04:38:01 | 2022-07-06T15:19:58 | 2022-07-06T15:19:58 | Provides description of data instances and dataset features | cryoff | https://github.com/huggingface/datasets/pull/2310 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2310",
"html_url": "https://github.com/huggingface/datasets/pull/2310",
"diff_url": "https://github.com/huggingface/datasets/pull/2310.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2310.patch",
"merged_at": null
} | true |
874,644,990 | 2,309 | Fix conda release | closed | [] | 2021-05-03T14:52:59 | 2021-05-03T16:01:17 | 2021-05-03T16:01:17 | There were a few issues with conda releases (they've been failing for a while now).
To fix this I had to:
- add the --single-version-externally-managed tag to the build stage (suggestion from [here](https://stackoverflow.com/a/64825075))
- set the python version of the conda build stage to 3.8 since 3.9 isn't suppor... | lhoestq | https://github.com/huggingface/datasets/pull/2309 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2309",
"html_url": "https://github.com/huggingface/datasets/pull/2309",
"diff_url": "https://github.com/huggingface/datasets/pull/2309.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2309.patch",
"merged_at": "2021-05-03T16:01... | true |
873,961,435 | 2,302 | Add SubjQA dataset | closed | [] | 2021-05-02T14:51:20 | 2021-05-10T09:21:19 | 2021-05-10T09:21:19 | Hello datasetters 🙂!
Here's an interesting dataset about extractive question-answering on _subjective_ product / restaurant reviews. It's quite challenging for models fine-tuned on SQuAD and provides a nice example of domain adaptation (i.e. fine-tuning a SQuAD model on this domain gives better performance).
I f... | lewtun | https://github.com/huggingface/datasets/pull/2302 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2302",
"html_url": "https://github.com/huggingface/datasets/pull/2302",
"diff_url": "https://github.com/huggingface/datasets/pull/2302.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2302.patch",
"merged_at": "2021-05-10T09:21... | true |
873,941,266 | 2,301 | Unable to setup dev env on Windows | closed | [] | 2021-05-02T13:20:42 | 2021-05-03T15:18:01 | 2021-05-03T15:17:34 | Hi
I tried installing the `".[dev]"` version on Windows 10 after cloning.
Here is the error I'm facing:
```bat
(env) C:\testing\datasets>pip install -e ".[dev]"
Obtaining file:///C:/testing/datasets
Requirement already satisfied: numpy>=1.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datas... | gchhablani | https://github.com/huggingface/datasets/issues/2301 | null | false |
873,928,169 | 2,300 | Add VoxPopuli | closed | [] | 2021-05-02T12:17:40 | 2023-02-28T17:43:52 | 2023-02-28T17:43:51 | ## Adding a Dataset
- **Name:** Voxpopuli
- **Description:** VoxPopuli is raw data collected from 2009-2020 European Parliament event recordings
- **Paper:** https://arxiv.org/abs/2101.00390
- **Data:** https://github.com/facebookresearch/voxpopuli
- **Motivation:** biggest unlabeled speech dataset
**Note**:... | patrickvonplaten | https://github.com/huggingface/datasets/issues/2300 | null | false |
873,914,717 | 2,299 | My iPhone | closed | [] | 2021-05-02T11:11:11 | 2021-07-23T09:24:16 | 2021-05-03T08:17:38 | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | Jasonbuchanan1983 | https://github.com/huggingface/datasets/issues/2299 | null | false |
873,771,942 | 2,298 | Mapping in the distributed setting | closed | [] | 2021-05-01T21:23:05 | 2021-05-03T13:54:53 | 2021-05-03T13:54:53 | The barrier trick for distributed mapping as discussed on Thursday with @lhoestq | TevenLeScao | https://github.com/huggingface/datasets/pull/2298 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2298",
"html_url": "https://github.com/huggingface/datasets/pull/2298",
"diff_url": "https://github.com/huggingface/datasets/pull/2298.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2298.patch",
"merged_at": "2021-05-03T13:54... | true |
872,974,907 | 2,296 | 1 | closed | [] | 2021-04-30T17:53:49 | 2021-05-03T08:17:31 | 2021-05-03T08:17:31 | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | zinnyi | https://github.com/huggingface/datasets/issues/2296 | null | false |
872,902,867 | 2,295 | Create ExtractManager | closed | [] | 2021-04-30T17:13:34 | 2021-07-12T14:12:03 | 2021-07-08T08:11:49 | Perform refactoring to decouple extract functionality. | albertvillanova | https://github.com/huggingface/datasets/pull/2295 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2295",
"html_url": "https://github.com/huggingface/datasets/pull/2295",
"diff_url": "https://github.com/huggingface/datasets/pull/2295.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2295.patch",
"merged_at": "2021-07-08T08:11... | true |
872,136,075 | 2,294 | Slow #0 when using map to tokenize. | open | [] | 2021-04-30T08:00:33 | 2021-05-04T11:00:11 | null | Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
loa... | VerdureChen | https://github.com/huggingface/datasets/issues/2294 | null | false |
872,079,385 | 2,293 | imdb dataset from Don't Stop Pretraining Paper | closed | [] | 2021-04-30T06:40:48 | 2021-04-30T06:54:25 | 2021-04-30T06:54:25 | BobbyManion | https://github.com/huggingface/datasets/pull/2293 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2293",
"html_url": "https://github.com/huggingface/datasets/pull/2293",
"diff_url": "https://github.com/huggingface/datasets/pull/2293.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2293.patch",
"merged_at": null
} | true | |
871,230,183 | 2,292 | Fixed typo seperate->separate | closed | [] | 2021-04-29T16:40:53 | 2021-04-30T13:29:18 | 2021-04-30T13:03:12 | laksh9950 | https://github.com/huggingface/datasets/pull/2292 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2292",
"html_url": "https://github.com/huggingface/datasets/pull/2292",
"diff_url": "https://github.com/huggingface/datasets/pull/2292.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2292.patch",
"merged_at": "2021-04-30T13:03... | true | |
871,216,757 | 2,291 | Don't copy recordbatches in memory during a table deepcopy | closed | [] | 2021-04-29T16:26:05 | 2021-04-29T16:34:35 | 2021-04-29T16:34:34 | Fix issue #2276 and hopefully #2134
The recordbatches of the `IndexedTableMixin` used to speed up queries to the table were copied in memory during a table deepcopy.
This resulted in `concatenate_datasets`, `load_from_disk` and other methods to always bring the data in memory.
I fixed the copy similarly to #2287... | lhoestq | https://github.com/huggingface/datasets/pull/2291 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2291",
"html_url": "https://github.com/huggingface/datasets/pull/2291",
"diff_url": "https://github.com/huggingface/datasets/pull/2291.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2291.patch",
"merged_at": "2021-04-29T16:34... | true |
871,145,817 | 2,290 | Bbaw egyptian | closed | [] | 2021-04-29T15:27:58 | 2021-05-06T17:25:25 | 2021-05-06T17:25:25 | This is the "hieroglyph corpus" that I could unfortunately not contribute during the marathon. I re-extracted it again now, so that it is in the state as used in my paper (seee documentation). I hope it satiesfies your requirements and wish every scientist out their loads of fun deciphering a 5.000 years old language :... | phiwi | https://github.com/huggingface/datasets/pull/2290 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2290",
"html_url": "https://github.com/huggingface/datasets/pull/2290",
"diff_url": "https://github.com/huggingface/datasets/pull/2290.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2290.patch",
"merged_at": "2021-05-06T17:25... | true |
871,118,573 | 2,289 | Allow collaborators to self-assign issues | closed | [] | 2021-04-29T15:07:06 | 2021-04-30T18:28:16 | 2021-04-30T18:28:16 | Allow collaborators (without write access to the repository) to self-assign issues.
In order to self-assign an issue, they have to comment it with the word: `#take` or `#self-assign`. | albertvillanova | https://github.com/huggingface/datasets/pull/2289 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2289",
"html_url": "https://github.com/huggingface/datasets/pull/2289",
"diff_url": "https://github.com/huggingface/datasets/pull/2289.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2289.patch",
"merged_at": "2021-04-30T18:28... | true |
871,111,235 | 2,288 | Load_dataset for local CSV files | closed | [] | 2021-04-29T15:01:10 | 2021-06-15T13:49:26 | 2021-06-15T13:49:26 | The method `load_dataset` fails to correctly load a dataset from CSV.
Moreover, I am working on a token-classification task (POS tagging), where each row in my CSV contains two columns, each holding a list of strings.
row example:
```tokens | labels
['I' , 'am', 'John'] | ['PRON', 'AUX', 'PROPN' ]
``... | sstojanoska | https://github.com/huggingface/datasets/issues/2288 | null | false |
871,063,374 | 2,287 | Avoid copying table's record batches | closed | [] | 2021-04-29T14:15:01 | 2021-04-29T16:34:23 | 2021-04-29T16:34:22 | Fixes #2276 | mariosasko | https://github.com/huggingface/datasets/pull/2287 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2287",
"html_url": "https://github.com/huggingface/datasets/pull/2287",
"diff_url": "https://github.com/huggingface/datasets/pull/2287.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2287.patch",
"merged_at": null
} | true |
871,032,393 | 2,286 | Fix metadata validation with config names | closed | [] | 2021-04-29T13:44:32 | 2021-04-29T14:07:29 | 2021-04-29T14:07:28 | I noticed in https://github.com/huggingface/datasets/pull/2280 that the metadata validator doesn't parse the tags in the readme properly when then contain the tags per config. | lhoestq | https://github.com/huggingface/datasets/pull/2286 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2286",
"html_url": "https://github.com/huggingface/datasets/pull/2286",
"diff_url": "https://github.com/huggingface/datasets/pull/2286.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2286.patch",
"merged_at": "2021-04-29T14:07... | true |
871,005,236 | 2,285 | Help understanding how to build a dataset for language modeling as with the old TextDataset | closed | [] | 2021-04-29T13:16:45 | 2021-05-19T07:22:45 | 2021-05-19T07:22:39 | Hello,
I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file that has a whole document in each line, meaning that each line exceeds the usual 512-token limit of most tokenizers.
I would like to understand what is the process to build a text datas... | danieldiezmallo | https://github.com/huggingface/datasets/issues/2285 | null | false |
870,932,710 | 2,284 | Initialize Imdb dataset as used in Don't Stop Pretraining Paper | closed | [] | 2021-04-29T11:52:38 | 2021-04-29T12:54:34 | 2021-04-29T12:54:34 | BobbyManion | https://github.com/huggingface/datasets/pull/2284 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2284",
"html_url": "https://github.com/huggingface/datasets/pull/2284",
"diff_url": "https://github.com/huggingface/datasets/pull/2284.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2284.patch",
"merged_at": null
} | true | |
870,926,475 | 2,283 | Initialize imdb dataset from don't stop pretraining paper | closed | [] | 2021-04-29T11:44:54 | 2021-04-29T11:50:24 | 2021-04-29T11:50:24 | BobbyManion | https://github.com/huggingface/datasets/pull/2283 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2283",
"html_url": "https://github.com/huggingface/datasets/pull/2283",
"diff_url": "https://github.com/huggingface/datasets/pull/2283.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2283.patch",
"merged_at": null
} | true | |
870,900,332 | 2,282 | Initialize imdb dataset from don't stop pretraining paper | closed | [] | 2021-04-29T11:17:56 | 2021-04-29T11:43:51 | 2021-04-29T11:43:51 | BobbyManion | https://github.com/huggingface/datasets/pull/2282 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2282",
"html_url": "https://github.com/huggingface/datasets/pull/2282",
"diff_url": "https://github.com/huggingface/datasets/pull/2282.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2282.patch",
"merged_at": null
} | true | |
870,792,784 | 2,281 | Update multi_woz_v22 checksum | closed | [] | 2021-04-29T09:09:11 | 2021-04-29T13:41:35 | 2021-04-29T13:41:34 | Fix issue https://github.com/huggingface/datasets/issues/1876
The files were changed in https://github.com/budzianowski/multiwoz/pull/72 | lhoestq | https://github.com/huggingface/datasets/pull/2281 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2281",
"html_url": "https://github.com/huggingface/datasets/pull/2281",
"diff_url": "https://github.com/huggingface/datasets/pull/2281.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2281.patch",
"merged_at": "2021-04-29T13:41... | true |
870,780,431 | 2,280 | Fixed typo seperate->separate | closed | [] | 2021-04-29T08:55:46 | 2021-04-29T16:41:22 | 2021-04-29T16:41:16 | laksh9950 | https://github.com/huggingface/datasets/pull/2280 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2280",
"html_url": "https://github.com/huggingface/datasets/pull/2280",
"diff_url": "https://github.com/huggingface/datasets/pull/2280.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2280.patch",
"merged_at": null
} | true | |
870,431,662 | 2,279 | Compatibility with Ubuntu 18 and GLIBC 2.27? | closed | [] | 2021-04-28T22:08:07 | 2021-04-29T07:42:42 | 2021-04-29T07:42:42 | ## Describe the bug
For use on Ubuntu systems, it seems that datasets requires GLIBC 2.29. However, Ubuntu 18 runs with GLIBC 2.27 and it seems [non-trivial to upgrade GLIBC to 2.29 for Ubuntu 18 users](https://www.digitalocean.com/community/questions/how-install-glibc-2-29-or-higher-in-ubuntu-18-04).
I'm not sure... | tginart | https://github.com/huggingface/datasets/issues/2279 | null | false |
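For anyone checking their own system against this report, the glibc version the interpreter sees can be read from the standard library (a quick diagnostic, not a fix):

```python
import platform

# returns a (libname, version) pair, e.g. ("glibc", "2.27") on Ubuntu 18.04
libc, version = platform.libc_ver()
print(libc, version)  # empty strings on non-Linux systems
```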
870,088,059 | 2,278 | Loss result in GPTNeoForCausalLM | closed | [] | 2021-04-28T15:39:52 | 2021-05-06T16:14:23 | 2021-05-06T16:14:23 | Is there any way to get the "loss" and "logits" results from the GPT Neo API? | Yossillamm | https://github.com/huggingface/datasets/issues/2278 | null | false
870,071,994 | 2,277 | Create CacheManager | open | [] | 2021-04-28T15:23:42 | 2022-07-06T15:19:48 | null | Perform refactoring to decouple cache functionality (method `as_dataset`). | albertvillanova | https://github.com/huggingface/datasets/pull/2277 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2277",
"html_url": "https://github.com/huggingface/datasets/pull/2277",
"diff_url": "https://github.com/huggingface/datasets/pull/2277.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2277.patch",
"merged_at": null
} | true |
870,010,511 | 2,276 | concatenate_datasets loads all the data into memory | closed | [] | 2021-04-28T14:27:21 | 2021-05-03T08:41:55 | 2021-05-03T08:41:55 | ## Describe the bug
When I try to concatenate 2 datasets (10 GB each), all the data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
 and [here](https://github.com/huggingface/datasets/tree/master/datasets/snli) don't list -1 as a label possibility, and neither does the dataset viewer. As examples, see index 107... | puzzler10 | https://github.com/huggingface/datasets/issues/2275 | null | false |
869,186,276 | 2,274 | Always update metadata in arrow schema | closed | [] | 2021-04-27T19:21:57 | 2022-06-03T08:31:19 | 2021-04-29T09:57:50 | We store a redundant copy of the features in the metadata of the schema of the arrow table. This is used to recover the features when doing `Dataset.from_file`. This metadata is updated after each transform that changes the feature types.
For each function that transforms the feature types of the dataset, I added ... | lhoestq | https://github.com/huggingface/datasets/pull/2274 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2274",
"html_url": "https://github.com/huggingface/datasets/pull/2274",
"diff_url": "https://github.com/huggingface/datasets/pull/2274.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2274.patch",
"merged_at": "2021-04-29T09:57... | true |
869,046,290 | 2,273 | Added CUAD metrics | closed | [] | 2021-04-27T16:49:12 | 2021-04-29T13:59:47 | 2021-04-29T13:59:47 | `EM`, `F1`, `AUPR`, `Precision@80%Recall`, and `Precision@90%Recall` metrics supported for CUAD | bhavitvyamalik | https://github.com/huggingface/datasets/pull/2273 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2273",
"html_url": "https://github.com/huggingface/datasets/pull/2273",
"diff_url": "https://github.com/huggingface/datasets/pull/2273.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2273.patch",
"merged_at": "2021-04-29T13:59... | true |
869,017,977 | 2,272 | Bug in Dataset.class_encode_column | closed | [] | 2021-04-27T16:13:18 | 2021-04-30T12:54:27 | 2021-04-30T12:54:27 | ## Describe the bug
All columns except the one passed to `Dataset.class_encode_column` are discarded.
## Expected results
All the original columns should be kept.
This needs regression tests.
| albertvillanova | https://github.com/huggingface/datasets/issues/2272 | null | false |
869,002,141 | 2,271 | Synchronize table metadata with features | closed | [] | 2021-04-27T15:55:13 | 2022-06-01T17:13:21 | 2022-06-01T17:13:21 | **Is your feature request related to a problem? Please describe.**
As pointed out in this [comment](https://github.com/huggingface/datasets/pull/2145#discussion_r621326767):
> Metadata stored in the schema is just a redundant information regarding the feature types.
It is used when calling Dataset.from_file to kno... | albertvillanova | https://github.com/huggingface/datasets/issues/2271 | null | false |
868,913,660 | 2,270 | Fix iterable interface expected by numpy | closed | [] | 2021-04-27T14:35:56 | 2021-04-28T17:39:27 | 2021-04-28T17:39:27 | Numpy expects the old iterable interface with `__getitem__` instead of `__iter__`. | albertvillanova | https://github.com/huggingface/datasets/pull/2270 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2270",
"html_url": "https://github.com/huggingface/datasets/pull/2270",
"diff_url": "https://github.com/huggingface/datasets/pull/2270.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2270.patch",
"merged_at": null
} | true |
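A minimal sketch of the "old iterable interface" mentioned above: a class exposing only `__getitem__` (raising `IndexError` past the end) is consumable by `iter()` and by numpy via the sequence protocol, even without `__iter__`:

```python
import numpy as np

class Legacy:
    """Old-style iterable: only __len__/__getitem__, no __iter__."""
    def __init__(self, data):
        self._data = data
    def __len__(self):
        return len(self._data)
    def __getitem__(self, i):
        return self._data[i]  # raises IndexError past the end

arr = np.array(Legacy([1, 2, 3]))  # numpy consumes it via the sequence protocol
```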
868,878,468 | 2,269 | Fix query table with iterable | closed | [] | 2021-04-27T13:59:38 | 2021-04-27T14:21:57 | 2021-04-27T14:21:56 | The benchmark runs are failing on master because it tries to use an iterable to query the dataset.
However there's currently an issue caused by the use of `np.array` instead of `np.fromiter` on the iterable.
This PR fixes it | lhoestq | https://github.com/huggingface/datasets/pull/2269 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2269",
"html_url": "https://github.com/huggingface/datasets/pull/2269",
"diff_url": "https://github.com/huggingface/datasets/pull/2269.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2269.patch",
"merged_at": "2021-04-27T14:21... | true |
868,773,380 | 2,268 | Don't use pyarrow 4.0.0 since it segfaults when casting a sliced ListArray of integers | closed | [] | 2021-04-27T11:58:28 | 2021-06-12T12:44:49 | 2021-04-27T13:43:20 | This test `tests/test_table.py::test_concatenation_table_cast` segfaults with the latest update of pyarrow 4.0.0.
Setting `pyarrow<4.0.0` for now. I'll open an issue on JIRA once I know more about the origin of the issue | lhoestq | https://github.com/huggingface/datasets/pull/2268 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2268",
"html_url": "https://github.com/huggingface/datasets/pull/2268",
"diff_url": "https://github.com/huggingface/datasets/pull/2268.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2268.patch",
"merged_at": "2021-04-27T13:43... | true |
868,291,129 | 2,267 | DatasetDict save load Failing test in 1.6 not in 1.5 | open | [] | 2021-04-27T00:03:25 | 2021-05-28T15:27:34 | null | ## Describe the bug
We have a test that saves a DatasetDict to disk and then loads it from disk. In 1.6 there is an incompatibility in the schema.
Downgrading to `<1.6` fixes the problem.
## Steps to reproduce the bug
```python
### Load a dataset dict from jsonl
path = '/test/foo'
ds_dict.s... | timothyjlaurent | https://github.com/huggingface/datasets/issues/2267 | null | false |
867,864,353 | 2,266 | Make tests run faster | closed | [] | 2021-04-26T15:55:40 | 2021-04-29T10:00:13 | 2021-04-29T10:00:04 | From 7min to 2min to run pytest.
Ideally we should keep the whole CI run time below 10min.
In this PR I removed the remote tests that were never used.
I also replaced nested parametrized tests with unit tests.
This makes me think that we could still add more high level tests to check for a few combinations of par... | lhoestq | https://github.com/huggingface/datasets/pull/2266 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2266",
"html_url": "https://github.com/huggingface/datasets/pull/2266",
"diff_url": "https://github.com/huggingface/datasets/pull/2266.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2266.patch",
"merged_at": "2021-04-29T10:00... | true |
867,490,646 | 2,265 | Update black | closed | [] | 2021-04-26T09:35:09 | 2021-04-26T09:47:48 | 2021-04-26T09:47:47 | The latest black version, 21.4b0, requires reformatting most dataset scripts as well as the core code of the library.
This makes the CI currently fail on master | lhoestq | https://github.com/huggingface/datasets/pull/2265 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2265",
"html_url": "https://github.com/huggingface/datasets/pull/2265",
"diff_url": "https://github.com/huggingface/datasets/pull/2265.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2265.patch",
"merged_at": "2021-04-26T09:47... | true |
867,476,228 | 2,264 | Fix memory issue in multiprocessing: Don't pickle table index | closed | [] | 2021-04-26T09:21:35 | 2021-04-26T10:30:28 | 2021-04-26T10:08:14 | The table index is currently being pickled when doing multiprocessing, which brings all the record batches of the dataset into memory.
I fixed that by not pickling the index attributes. Therefore each process has to rebuild the index when unpickling the table.
Fix issue #2256
We'll do a patch release asap ! | lhoestq | https://github.com/huggingface/datasets/pull/2264 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2264",
"html_url": "https://github.com/huggingface/datasets/pull/2264",
"diff_url": "https://github.com/huggingface/datasets/pull/2264.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2264.patch",
"merged_at": "2021-04-26T10:08... | true |
867,420,912 | 2,263 | test data added, dataset_infos updated | closed | [] | 2021-04-26T08:27:18 | 2021-04-29T09:30:21 | 2021-04-29T09:30:20 | Fixes #2262. Thanks for pointing out issue with dataset @jinmang2! | bhavitvyamalik | https://github.com/huggingface/datasets/pull/2263 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2263",
"html_url": "https://github.com/huggingface/datasets/pull/2263",
"diff_url": "https://github.com/huggingface/datasets/pull/2263.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2263.patch",
"merged_at": "2021-04-29T09:30... | true |
867,325,351 | 2,262 | NewsPH NLI dataset script fails to access test data. | closed | [] | 2021-04-26T06:44:41 | 2021-04-29T09:32:03 | 2021-04-29T09:30:20 | The NewsPH-NLI dataset script (#1192) fails to access the test data.
According to the script below, the download manager will download the train data when trying to download the test data.
https://github.com/huggingface/datasets/blob/2a2dd6316af2cc7fdf24e4779312e8ee0c7ed98b/datasets/newsph_nli/newsph_nli.py#L71
If yo... | jinmang2 | https://github.com/huggingface/datasets/issues/2262 | null | false |
867,088,818 | 2,261 | Improve ReadInstruction logic and update docs | closed | [] | 2021-04-25T19:07:26 | 2021-05-17T18:24:44 | 2021-05-17T16:48:57 | Improve ReadInstruction logic and docs. | mariosasko | https://github.com/huggingface/datasets/pull/2261 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2261",
"html_url": "https://github.com/huggingface/datasets/pull/2261",
"diff_url": "https://github.com/huggingface/datasets/pull/2261.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2261.patch",
"merged_at": "2021-05-17T16:48... | true |
866,961,697 | 2,260 | GooAQ dataset added | closed | [] | 2021-04-25T09:26:48 | 2021-05-07T08:36:17 | 2021-05-07T08:36:17 | @lhoestq here the dataset is stored with Git LFS. Should I add option for manual downloading of dataset using `git lfs pull` post repo cloning or can we accommodate this in the current `download_and_extract`? | bhavitvyamalik | https://github.com/huggingface/datasets/pull/2260 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2260",
"html_url": "https://github.com/huggingface/datasets/pull/2260",
"diff_url": "https://github.com/huggingface/datasets/pull/2260.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2260.patch",
"merged_at": "2021-05-07T08:36... | true |
866,880,092 | 2,259 | Add support for Split.ALL | closed | [] | 2021-04-25T01:45:42 | 2021-06-28T08:21:27 | 2021-06-28T08:21:27 | The title says it all. | mariosasko | https://github.com/huggingface/datasets/pull/2259 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2259",
"html_url": "https://github.com/huggingface/datasets/pull/2259",
"diff_url": "https://github.com/huggingface/datasets/pull/2259.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2259.patch",
"merged_at": "2021-06-28T08:21... | true |
866,870,588 | 2,258 | Fix incorrect update_metadata_with_features calls in ArrowDataset | closed | [] | 2021-04-25T00:48:38 | 2021-04-26T17:16:30 | 2021-04-26T16:54:04 | Fixes bugs in the `update_metadata_with_features` calls (caused by changes in #2151) | mariosasko | https://github.com/huggingface/datasets/pull/2258 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2258",
"html_url": "https://github.com/huggingface/datasets/pull/2258",
"diff_url": "https://github.com/huggingface/datasets/pull/2258.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2258.patch",
"merged_at": "2021-04-26T16:54... | true |
866,755,203 | 2,257 | added metrics for CUAD | closed | [] | 2021-04-24T14:09:54 | 2021-04-29T09:53:38 | 2021-04-27T16:16:32 | For now I've added F1, AUPR, Precision at 80% recall, and Precision at 90%. Last 3 metrics were reported in the [paper](https://arxiv.org/pdf/2103.06268.pdf). Please let me know if we require `exact_match` metric too here | bhavitvyamalik | https://github.com/huggingface/datasets/pull/2257 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2257",
"html_url": "https://github.com/huggingface/datasets/pull/2257",
"diff_url": "https://github.com/huggingface/datasets/pull/2257.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2257.patch",
"merged_at": null
} | true |
866,708,609 | 2,256 | Running `dataset.map` with `num_proc > 1` uses a lot of memory | closed | [] | 2021-04-24T09:56:20 | 2021-04-26T17:12:15 | 2021-04-26T17:12:15 | ## Describe the bug
Running `dataset.map` with `num_proc > 1` leads to tremendous memory usage that requires swapping to disk, and it becomes very slow.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dstc8_datset = load_dataset("roskoN/dstc8-reddit-corpus", keep_in_memory=False)
... | roskoN | https://github.com/huggingface/datasets/issues/2256 | null | false |
866,242,892 | 2,255 | Task casting for text classification & question answering | closed | [] | 2021-04-23T16:00:41 | 2021-05-18T13:31:36 | 2021-05-18T13:31:35 | This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers's pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-clas... | SBrandeis | https://github.com/huggingface/datasets/pull/2255 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2255",
"html_url": "https://github.com/huggingface/datasets/pull/2255",
"diff_url": "https://github.com/huggingface/datasets/pull/2255.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2255.patch",
"merged_at": "2021-05-18T13:31... | true |
866,169,312 | 2,254 | Update format, fingerprint and indices after add_item | closed | [] | 2021-04-23T14:31:49 | 2021-04-27T16:30:49 | 2021-04-27T16:30:48 | Added fingerprint and format update wrappers + update the indices by adding the index of the newly added item in the table. | lhoestq | https://github.com/huggingface/datasets/pull/2254 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2254",
"html_url": "https://github.com/huggingface/datasets/pull/2254",
"diff_url": "https://github.com/huggingface/datasets/pull/2254.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2254.patch",
"merged_at": "2021-04-27T16:30... | true |
866,034,321 | 2,253 | Perform minor refactoring: use config | closed | [] | 2021-04-23T11:45:47 | 2021-05-27T09:12:45 | 2021-04-27T15:02:59 | Perform minor refactoring related to `config`. | albertvillanova | https://github.com/huggingface/datasets/pull/2253 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2253",
"html_url": "https://github.com/huggingface/datasets/pull/2253",
"diff_url": "https://github.com/huggingface/datasets/pull/2253.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2253.patch",
"merged_at": "2021-04-27T15:02... | true |
865,870,710 | 2,252 | Slow dataloading with big datasets issue persists | closed | [] | 2021-04-23T08:18:20 | 2024-01-26T15:10:28 | 2024-01-26T15:10:28 | Hi,
I reported slow data fetching when data is large (#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here are the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total ... | hwijeen | https://github.com/huggingface/datasets/issues/2252 | null | false |
865,848,705 | 2,251 | while running run_qa.py, ran into a value error | open | [] | 2021-04-23T07:51:03 | 2021-04-23T07:51:03 | null | command:
python3 run_qa.py --model_name_or_path hyunwoongko/kobart --dataset_name squad_kor_v2 --do_train --do_eval --per_device_train_batch_size 8 --learning_rate 3e-5 --num_train_epochs 3 --max_seq_length 512 --doc_stride 128 --output_dir /tmp/debug_squad/
error:
ValueError: External fe... | nlee0212 | https://github.com/huggingface/datasets/issues/2251 | null | false |
865,402,449 | 2,250 | some issue in loading local txt file as Dataset for run_mlm.py | closed | [] | 2021-04-22T19:39:13 | 2022-03-30T08:29:47 | 2022-03-30T08:29:47 | 
First of all, I tried to load 3 .txt files as a dataset (the directory and permissions are definitely OK), but I ran into the error below.
> FileNotFoundError: [Errno 2] No such file or directory: 'c'
by ... | alighofrani95 | https://github.com/huggingface/datasets/issues/2250 | null | false |
865,257,826 | 2,249 | Allow downloading/processing/caching only specific splits | open | [] | 2021-04-22T17:51:44 | 2022-07-06T15:19:48 | null | Allow downloading/processing/caching only specific splits without downloading/processing/caching the other splits.
This PR implements two steps to handle only specific splits:
- it allows processing/caching only specific splits into Arrow files
- for some simple cases, it allows downloading only specific splits (w... | albertvillanova | https://github.com/huggingface/datasets/pull/2249 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2249",
"html_url": "https://github.com/huggingface/datasets/pull/2249",
"diff_url": "https://github.com/huggingface/datasets/pull/2249.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2249.patch",
"merged_at": null
} | true |
864,853,447 | 2,248 | Implement Dataset to JSON | closed | [] | 2021-04-22T11:46:51 | 2021-04-27T15:29:21 | 2021-04-27T15:29:20 | Implement `Dataset.to_json`. | albertvillanova | https://github.com/huggingface/datasets/pull/2248 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2248",
"html_url": "https://github.com/huggingface/datasets/pull/2248",
"diff_url": "https://github.com/huggingface/datasets/pull/2248.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2248.patch",
"merged_at": "2021-04-27T15:29... | true |
864,817,520 | 2,247 | Implement Dataset from Parquet | closed | [] | 2021-04-22T11:01:38 | 2021-07-26T13:28:52 | 2021-07-26T13:28:51 | Implement instantiation of Dataset from Parquet file. | albertvillanova | https://github.com/huggingface/datasets/pull/2247 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2247",
"html_url": "https://github.com/huggingface/datasets/pull/2247",
"diff_url": "https://github.com/huggingface/datasets/pull/2247.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2247.patch",
"merged_at": null
} | true |
864,220,031 | 2,246 | Faster map w/ input_columns & faster slicing w/ Iterable keys | closed | [] | 2021-04-21T19:49:07 | 2021-04-26T16:13:59 | 2021-04-26T16:13:59 | @lhoestq Fixes #2193
- `map` now uses `with_format` to only load needed columns in memory when `input_columns` is set
- Slicing datasets with Iterables of indices now uses a new `Table.fast_gather` method, implemented with `np.searchsorted`, to find the appropriate batch indices all at once. `pa.concat_tables` is ... | norabelrose | https://github.com/huggingface/datasets/pull/2246 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2246",
"html_url": "https://github.com/huggingface/datasets/pull/2246",
"diff_url": "https://github.com/huggingface/datasets/pull/2246.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2246.patch",
"merged_at": "2021-04-26T16:13... | true |