| Column | Type | Values / range |
|---|---|---|
| url | stringlengths | 58–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72–75 |
| comments_url | stringlengths | 67–70 |
| events_url | stringlengths | 65–68 |
| html_url | stringlengths | 46–51 |
| id | int64 | 599M–1B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–2.96k |
| title | stringlengths | 1–268 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | list | |
| created_at | int64 | 1,587B–1,632B |
| updated_at | int64 | 1,587B–1,632B |
| closed_at | int64 | 1,587B–1,632B |
| author_association | stringclasses | 4 values |
| active_lock_reason | null | |
| pull_request | dict | |
| body | stringlengths | 0–228k |
| timeline_url | stringlengths | 67–70 |
| performed_via_github_app | null | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/2236
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2236/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2236/comments
https://api.github.com/repos/huggingface/datasets/issues/2236/events
https://github.com/huggingface/datasets/issues/2236
861,388,145
MDU6SXNzdWU4NjEzODgxNDU=
2,236
Request to add StrategyQA dataset
{ "login": "sarahwie", "id": 8027676, "node_id": "MDQ6VXNlcjgwMjc2NzY=", "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarahwie", "html_url": "https://github.com/sarahwie", "followers_url": "https://api.github.com/users/sarahwie/followers", "following_url": "https://api.github.com/users/sarahwie/following{/other_user}", "gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}", "starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions", "organizations_url": "https://api.github.com/users/sarahwie/orgs", "repos_url": "https://api.github.com/users/sarahwie/repos", "events_url": "https://api.github.com/users/sarahwie/events{/privacy}", "received_events_url": "https://api.github.com/users/sarahwie/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[]
1,618,843,586,000
1,618,843,586,000
null
NONE
null
null
## Request to add StrategyQA dataset

- **Name:** StrategyQA
- **Description:** open-domain QA [(project page)](https://allenai.org/data/strategyqa)
- **Paper:** [url](https://arxiv.org/pdf/2101.02235.pdf)
- **Data:** [here](https://allenai.org/data/strategyqa)
- **Motivation:** uniquely-formulated dataset that also includes a question-decomposition breakdown and associated Wikipedia annotations for each step. Good for multi-hop reasoning modeling.
https://api.github.com/repos/huggingface/datasets/issues/2236/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2235
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2235/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2235/comments
https://api.github.com/repos/huggingface/datasets/issues/2235/events
https://github.com/huggingface/datasets/pull/2235
861,040,716
MDExOlB1bGxSZXF1ZXN0NjE3Nzc0NDUw
2,235
Update README.md
{ "login": "PierreColombo", "id": 22492839, "node_id": "MDQ6VXNlcjIyNDkyODM5", "avatar_url": "https://avatars.githubusercontent.com/u/22492839?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PierreColombo", "html_url": "https://github.com/PierreColombo", "followers_url": "https://api.github.com/users/PierreColombo/followers", "following_url": "https://api.github.com/users/PierreColombo/following{/other_user}", "gists_url": "https://api.github.com/users/PierreColombo/gists{/gist_id}", "starred_url": "https://api.github.com/users/PierreColombo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PierreColombo/subscriptions", "organizations_url": "https://api.github.com/users/PierreColombo/orgs", "repos_url": "https://api.github.com/users/PierreColombo/repos", "events_url": "https://api.github.com/users/PierreColombo/events{/privacy}", "received_events_url": "https://api.github.com/users/PierreColombo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,618,820,462,000
1,618,836,559,000
1,618,836,559,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2235", "html_url": "https://github.com/huggingface/datasets/pull/2235", "diff_url": "https://github.com/huggingface/datasets/pull/2235.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2235.patch" }
Adding relevant citations (paper accepted at AAAI 2020 & EMNLP 2020) to the benchmark
https://api.github.com/repos/huggingface/datasets/issues/2235/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2234
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2234/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2234/comments
https://api.github.com/repos/huggingface/datasets/issues/2234/events
https://github.com/huggingface/datasets/pull/2234
860,442,246
MDExOlB1bGxSZXF1ZXN0NjE3MzI4NDU3
2,234
Fix bash snippet formatting in ADD_NEW_DATASET.md
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,618,675,268,000
1,618,829,851,000
1,618,818,696,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2234", "html_url": "https://github.com/huggingface/datasets/pull/2234", "diff_url": "https://github.com/huggingface/datasets/pull/2234.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2234.patch" }
This PR indents the paragraphs around the bash snippets in ADD_NEW_DATASET.md to fix formatting.
https://api.github.com/repos/huggingface/datasets/issues/2234/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2233
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2233/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2233/comments
https://api.github.com/repos/huggingface/datasets/issues/2233/events
https://github.com/huggingface/datasets/pull/2233
860,097,084
MDExOlB1bGxSZXF1ZXN0NjE3MDYwMTkw
2,233
Fix `xnli` dataset tuple key
{ "login": "NikhilBartwal", "id": 42388668, "node_id": "MDQ6VXNlcjQyMzg4NjY4", "avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NikhilBartwal", "html_url": "https://github.com/NikhilBartwal", "followers_url": "https://api.github.com/users/NikhilBartwal/followers", "following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}", "gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}", "starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions", "organizations_url": "https://api.github.com/users/NikhilBartwal/orgs", "repos_url": "https://api.github.com/users/NikhilBartwal/repos", "events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}", "received_events_url": "https://api.github.com/users/NikhilBartwal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,618,600,362,000
1,618,822,602,000
1,618,822,602,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2233", "html_url": "https://github.com/huggingface/datasets/pull/2233", "diff_url": "https://github.com/huggingface/datasets/pull/2233.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2233.patch" }
Closes #2229

The `xnli` dataset yields a tuple key in the case of `ar`, which is inconsistent with the acceptable key types (`str`/`int`). The key was thus converted to `str`, keeping the original information intact.
https://api.github.com/repos/huggingface/datasets/issues/2233/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2232
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2232/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2232/comments
https://api.github.com/repos/huggingface/datasets/issues/2232/events
https://github.com/huggingface/datasets/pull/2232
860,075,931
MDExOlB1bGxSZXF1ZXN0NjE3MDQyNTI4
2,232
Start filling GLUE dataset card
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I replaced all the \"we\" and applied your suggestion", "Merging this for now, we can continue improving this card in other PRs :)" ]
1,618,598,257,000
1,618,997,589,000
1,618,997,588,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2232", "html_url": "https://github.com/huggingface/datasets/pull/2232", "diff_url": "https://github.com/huggingface/datasets/pull/2232.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2232.patch" }
The dataset card was pretty much empty. I added the descriptions (mainly from TFDS since the script is the same), and I also added the task tags as well as examples for a subset of the tasks. cc @sgugger
https://api.github.com/repos/huggingface/datasets/issues/2232/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2231
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2231/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2231/comments
https://api.github.com/repos/huggingface/datasets/issues/2231/events
https://github.com/huggingface/datasets/pull/2231
859,850,488
MDExOlB1bGxSZXF1ZXN0NjE2ODYyNTEx
2,231
Fix map when removing columns on a formatted dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,618,582,135,000
1,618,585,805,000
1,618,585,804,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2231", "html_url": "https://github.com/huggingface/datasets/pull/2231", "diff_url": "https://github.com/huggingface/datasets/pull/2231.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2231.patch" }
This should fix issue #2226. The `remove_columns` argument was ignored on formatted datasets.
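A minimal sketch of the pattern this fixes (the dataset and column names below are illustrative, not taken from the PR):

```python
from datasets import load_dataset

# Illustrative dataset/columns; any formatted dataset triggered the bug.
ds = load_dataset("glue", "mrpc", split="train")
ds.set_format("numpy", columns=["label"], output_all_columns=True)

# With the fix, `remove_columns` is honored even though a format is set,
# so the result contains only the newly produced column.
ds = ds.map(
    lambda batch: {"n_chars": [len(s) for s in batch["sentence1"]]},
    batched=True,
    remove_columns=ds.column_names,
)
```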
https://api.github.com/repos/huggingface/datasets/issues/2231/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2230
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2230/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2230/comments
https://api.github.com/repos/huggingface/datasets/issues/2230/events
https://github.com/huggingface/datasets/issues/2230
859,817,159
MDU6SXNzdWU4NTk4MTcxNTk=
2,230
Keys yielded while generating dataset are not being checked
{ "login": "NikhilBartwal", "id": 42388668, "node_id": "MDQ6VXNlcjQyMzg4NjY4", "avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NikhilBartwal", "html_url": "https://github.com/NikhilBartwal", "followers_url": "https://api.github.com/users/NikhilBartwal/followers", "following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}", "gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}", "starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions", "organizations_url": "https://api.github.com/users/NikhilBartwal/orgs", "repos_url": "https://api.github.com/users/NikhilBartwal/repos", "events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}", "received_events_url": "https://api.github.com/users/NikhilBartwal/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi ! Indeed there's no verification on the uniqueness nor the types of the keys.\r\nDo you already have some ideas of what you would like to implement and how ?", "Hey @lhoestq, thank you so much for the opportunity.\r\nAlthough I haven't had much experience with the HF Datasets code, after a careful look at how...
1,618,579,787,000
1,620,667,881,000
1,620,667,881,000
CONTRIBUTOR
null
null
The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e. either `str` or `int`) as well as for uniqueness. Currently, the keys are not checked for either, as evident from the `xnli` dataset generation: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196

Even with a tuple as key, the dataset is generated without any warning. Also, as tested on the `anli` dataset (I tweaked the dataset script to use `1` as the key for every example):

```
>>> import datasets
>>> nik = datasets.load_dataset('anli')
Downloading and preparing dataset anli/plain_text (download: 17.76 MiB, generated: 73.55 MiB, post-processed: Unknown size, total: 91.31 MiB) to C:\Users\nikhil\.cache\huggingface\datasets\anli\plain_text\0.1.0\43fa2c99c10bf8478f1fa0860f7b122c6b277c4c41306255b7641257cf4e3299...
0 examples [00:00, ? examples/s]1 {'uid': '0fd0abfb-659e-4453-b196-c3a64d2d8267', 'premise': 'The Parma trolleybus system (Italian: "Rete filoviaria di Parma" ) forms part of the public transport network of the city and "comune" of Parma, in the region of Emilia-Romagna, northern Italy. In operation since 1953, the system presently comprises four urban routes.', 'hypothesis': 'The trolleybus system has over 2 urban routes', 'label': 'entailment', 'reason': ''}
2021-04-16 12:38:14.483968: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
1 examples [00:01, 1.87s/ examples]1 {'uid': '7ed72ff4-40b7-4f8a-b1b9-6c612aa62c84', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Sharron Macready was a popular character through the 1980's.", 'label': 'neutral', 'reason': ''}
1 {'uid': '5d2930a3-62ac-485d-94d7-4e36cbbcd7b5', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Bastedo didn't keep any pets because of her views on animal rights.", 'label': 'neutral', 'reason': ''}
1 {'uid': '324db753-ddc9-4a85-a825-f09e2e5aebdd', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Alexandra Bastedo was named by her mother.', 'label': 'neutral', 'reason': ''}
1 {'uid': '4874f429-da0e-406a-90c7-22240ff3ddf8', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Bastedo cared for all the animals that inhabit the earth.', 'label': 'neutral', 'reason': ''}
```

Here also, the dataset was generated successfully, even though it had duplicate keys, without any warning. The reason appears to stem from here: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L988

Here, although it has access to every key, the key is not checked and the example is written directly: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L992

I would like to take this issue if you allow me. Thank you!
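A minimal sketch of the kind of check being proposed (the function name and placement are assumptions for illustration, not the eventual `datasets` implementation):

```python
def check_keys(example_generator):
    """Validate key type and uniqueness for (key, example) pairs."""
    seen = set()
    for key, example in example_generator:
        # Keys must be str or int so ordering is reproducible across runs.
        if not isinstance(key, (str, int)):
            raise TypeError(f"Key {key!r} must be str or int, got {type(key).__name__}")
        # Duplicate keys would silently break ordering guarantees.
        if key in seen:
            raise ValueError(f"Duplicate key found: {key!r}")
        seen.add(key)
        yield key, example
```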
https://api.github.com/repos/huggingface/datasets/issues/2230/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2229
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2229/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2229/comments
https://api.github.com/repos/huggingface/datasets/issues/2229/events
https://github.com/huggingface/datasets/issues/2229
859,810,602
MDU6SXNzdWU4NTk4MTA2MDI=
2,229
`xnli` dataset creating a tuple key while yielding instead of `str` or `int`
{ "login": "NikhilBartwal", "id": 42388668, "node_id": "MDQ6VXNlcjQyMzg4NjY4", "avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NikhilBartwal", "html_url": "https://github.com/NikhilBartwal", "followers_url": "https://api.github.com/users/NikhilBartwal/followers", "following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}", "gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}", "starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions", "organizations_url": "https://api.github.com/users/NikhilBartwal/orgs", "repos_url": "https://api.github.com/users/NikhilBartwal/repos", "events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}", "received_events_url": "https://api.github.com/users/NikhilBartwal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Sure sounds good. Also if you find other datasets that use tuples instead of str/int, you can also fix them !\r\nthanks :)", "@lhoestq I have sent a PR for fixing the issue. Would be great if you could have a look! Thanks!" ]
1,618,579,313,000
1,618,822,602,000
1,618,822,602,000
CONTRIBUTOR
null
null
When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code when yielding examples, which produces a tuple key instead of the specified `str` or `int` key: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196

Since community datasets in TensorFlow Datasets also use HF datasets, this causes a tuple key error while loading HF's `xnli` dataset. I'm up for sending a fix for this; I think we can simply use `file_idx + "_" + row_idx` as a unique key instead of a tuple.
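A minimal sketch of the proposed key fix (the real `xnli` generation code differs; this only illustrates flattening the tuple into a string key):

```python
def generate_examples(files):
    # Instead of yielding the tuple (file_idx, row_idx) as the key,
    # join the two indices into a single unique string key.
    for file_idx, rows in enumerate(files):
        for row_idx, row in enumerate(rows):
            yield f"{file_idx}_{row_idx}", row
```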
https://api.github.com/repos/huggingface/datasets/issues/2229/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2228
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2228/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2228/comments
https://api.github.com/repos/huggingface/datasets/issues/2228/events
https://github.com/huggingface/datasets/pull/2228
859,795,563
MDExOlB1bGxSZXF1ZXN0NjE2ODE2MTQz
2,228
[WIP] Add ArrayXD support for fixed size list.
{ "login": "jblemoine", "id": 22685854, "node_id": "MDQ6VXNlcjIyNjg1ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/22685854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jblemoine", "html_url": "https://github.com/jblemoine", "followers_url": "https://api.github.com/users/jblemoine/followers", "following_url": "https://api.github.com/users/jblemoine/following{/other_user}", "gists_url": "https://api.github.com/users/jblemoine/gists{/gist_id}", "starred_url": "https://api.github.com/users/jblemoine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jblemoine/subscriptions", "organizations_url": "https://api.github.com/users/jblemoine/orgs", "repos_url": "https://api.github.com/users/jblemoine/repos", "events_url": "https://api.github.com/users/jblemoine/events{/privacy}", "received_events_url": "https://api.github.com/users/jblemoine/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Awesome thanks ! To fix the CI you just need to merge master into your branch.\r\nThe error is unrelated to your PR" ]
1,618,578,248,000
1,618,837,338,000
null
NONE
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2228", "html_url": "https://github.com/huggingface/datasets/pull/2228", "diff_url": "https://github.com/huggingface/datasets/pull/2228.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2228.patch" }
Add support for fixed-size lists for ArrayXD when the shape is known. See https://github.com/huggingface/datasets/issues/2146. Since offsets are not stored anymore, the file size is now roughly equal to the actual data size.
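A small illustration of the underlying Arrow type (assumed context, not the PR's code): a fixed-size list stores no per-element offsets, which is why the serialized size tracks the raw data size.

```python
import pyarrow as pa

# A variable-size list stores an offsets buffer alongside the values;
# a fixed-size list of known length 2 does not need one.
variable = pa.array([[1, 2], [3, 4]], type=pa.list_(pa.int32()))
fixed = pa.array([[1, 2], [3, 4]], type=pa.list_(pa.int32(), 2))
print(fixed.type)  # fixed_size_list<item: int32>[2]
```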
https://api.github.com/repos/huggingface/datasets/issues/2228/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2227
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2227/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2227/comments
https://api.github.com/repos/huggingface/datasets/issues/2227/events
https://github.com/huggingface/datasets/pull/2227
859,771,526
MDExOlB1bGxSZXF1ZXN0NjE2Nzk1NjMx
2,227
Use update_metadata_with_features decorator in class_encode_column method
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,618,576,301,000
1,618,580,980,000
1,618,580,979,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2227", "html_url": "https://github.com/huggingface/datasets/pull/2227", "diff_url": "https://github.com/huggingface/datasets/pull/2227.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2227.patch" }
Following @mariosasko's comment.
https://api.github.com/repos/huggingface/datasets/issues/2227/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2226
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2226/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2226/comments
https://api.github.com/repos/huggingface/datasets/issues/2226/events
https://github.com/huggingface/datasets/issues/2226
859,720,302
MDU6SXNzdWU4NTk3MjAzMDI=
2,226
Batched map fails when removing all columns
{ "login": "villmow", "id": 2743060, "node_id": "MDQ6VXNlcjI3NDMwNjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/villmow", "html_url": "https://github.com/villmow", "followers_url": "https://api.github.com/users/villmow/followers", "following_url": "https://api.github.com/users/villmow/following{/other_user}", "gists_url": "https://api.github.com/users/villmow/gists{/gist_id}", "starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/villmow/subscriptions", "organizations_url": "https://api.github.com/users/villmow/orgs", "repos_url": "https://api.github.com/users/villmow/repos", "events_url": "https://api.github.com/users/villmow/events{/privacy}", "received_events_url": "https://api.github.com/users/villmow/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "I found the problem. I called `set_format` on some columns before. This makes it crash. Here is a complete example to reproduce:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nsst = load_dataset(\"sst\")\r\nsst.set_format(\"torch\", columns=[\"label\"], output_all_columns=True)\r\nds = sst[\"train\"]\r\n...
1,618,571,821,000
1,618,585,841,000
null
NONE
null
null
Hi @lhoestq, I'm hijacking this issue because I'm currently trying the approach you recommend:

> Currently the optimal setup for single-column computations is probably to do something like
>
> ```python
> result = dataset.map(f, input_columns="my_col", remove_columns=dataset.column_names)
> ```

Here is my code (see the edit below, in which I added a simplified version). This is the error:

```bash
pyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 8964 but got length 1000
```

I wonder why this error occurs when I delete every column? Can you give me a hint?

### Edit:
I preprocessed my dataset before (using map with the features argument) and saved it to disk. Might this be part of the error? I can iterate over the complete dataset and print every sample before calling map. There seems to be no other problem with the dataset. I tried to simplify the code that crashes:

```python
# works
log.debug(dataset.column_names)
log.debug(dataset)
for i, sample in enumerate(dataset):
    log.debug(i, sample)

# crashes
counted_dataset = dataset.map(
    lambda x: {"a": list(range(20))},
    input_columns=column,
    remove_columns=dataset.column_names,
    load_from_cache_file=False,
    num_proc=num_workers,
    batched=True,
)
```

```
pyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 20 but got length 1000
```

Edit 2: Might this be a problem with a schema I set when preprocessing the dataset? I tried to add the `features` argument to the function and then I get a new error:

```python
# crashes
counted_dataset = dataset.map(
    lambda x: {"a": list(range(20))},
    input_columns=column,
    remove_columns=dataset.column_names,
    load_from_cache_file=False,
    num_proc=num_workers,
    batched=True,
    features=datasets.Features(
        {
            "a": datasets.Sequence(datasets.Value("int32"))
        }
    )
)
```

```
  File "env/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1704, in _map_single
    writer.write_batch(batch)
  File "env/lib/python3.8/site-packages/datasets/arrow_writer.py", line 312, in write_batch
    col_type = schema.field(col).type if schema is not None else None
  File "pyarrow/types.pxi", line 1341, in pyarrow.lib.Schema.field
KeyError: 'Column tokens does not exist in schema'
```

_Originally posted by @villmow in https://github.com/huggingface/datasets/issues/2193#issuecomment-820230874_
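A minimal sketch of the workaround implied by the reproduction in the comments (the `sst` dataset and call sequence mirror that reproduction; treating `reset_format` as the workaround is my assumption, not a confirmed resolution):

```python
from datasets import load_dataset

sst = load_dataset("sst", split="train")
sst.set_format("torch", columns=["label"], output_all_columns=True)

# Assumed workaround: clear the format before mapping so that
# `remove_columns` is applied to the real (unformatted) table.
sst.reset_format()

counted = sst.map(
    lambda batch: {"a": [len(s) for s in batch["sentence"]]},
    remove_columns=sst.column_names,
    batched=True,
)
```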
https://api.github.com/repos/huggingface/datasets/issues/2226/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2225
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2225/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2225/comments
https://api.github.com/repos/huggingface/datasets/issues/2225/events
https://github.com/huggingface/datasets/pull/2225
858,469,561
MDExOlB1bGxSZXF1ZXN0NjE1NzAzMTY4
2,225
fixed one instance of 'train' to 'test'
{ "login": "alexwdong", "id": 46733535, "node_id": "MDQ6VXNlcjQ2NzMzNTM1", "avatar_url": "https://avatars.githubusercontent.com/u/46733535?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexwdong", "html_url": "https://github.com/alexwdong", "followers_url": "https://api.github.com/users/alexwdong/followers", "following_url": "https://api.github.com/users/alexwdong/following{/other_user}", "gists_url": "https://api.github.com/users/alexwdong/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexwdong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexwdong/subscriptions", "organizations_url": "https://api.github.com/users/alexwdong/orgs", "repos_url": "https://api.github.com/users/alexwdong/repos", "events_url": "https://api.github.com/users/alexwdong/events{/privacy}", "received_events_url": "https://api.github.com/users/alexwdong/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks ! good catch\r\n\r\nCould you also update the metadata of this dataset ?\r\nYou can do so by running\r\n```\r\ndatasets-cli test ./datasets/newsgroup --all_configs --save_infos --ignore_verifications\r\n```\r\nThis should update the dataset_infos.json file that contains the size of all the splits for exampl...
1,618,460,800,000
1,618,524,590,000
1,618,521,549,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2225", "html_url": "https://github.com/huggingface/datasets/pull/2225", "diff_url": "https://github.com/huggingface/datasets/pull/2225.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2225.patch" }
I believe this should be 'test' instead of 'train'
https://api.github.com/repos/huggingface/datasets/issues/2225/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2224
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2224/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2224/comments
https://api.github.com/repos/huggingface/datasets/issues/2224/events
https://github.com/huggingface/datasets/issues/2224
857,983,361
MDU6SXNzdWU4NTc5ODMzNjE=
2,224
Raise error if Windows max path length is not disabled
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,618,412,240,000
1,618,412,353,000
null
MEMBER
null
null
On startup, raise an error if Windows max path length is not disabled; ask the user to disable it. Linked to discussion in #2220.
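One way such a startup check could look (a hypothetical sketch, not the implementation that was eventually adopted): on Windows, the `LongPathsEnabled` registry value controls the path-length limit.

```python
import sys


def assert_long_paths_enabled():
    # Hypothetical startup check: on Windows, read the LongPathsEnabled
    # registry value and fail fast with instructions if it is off.
    if not sys.platform.startswith("win"):
        return
    import winreg

    with winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SYSTEM\CurrentControlSet\Control\FileSystem",
    ) as key:
        enabled, _ = winreg.QueryValueEx(key, "LongPathsEnabled")
    if not enabled:
        raise OSError(
            "Windows max path length limit is enabled. Please set "
            "LongPathsEnabled=1 (see the discussion in #2220)."
        )
```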
https://api.github.com/repos/huggingface/datasets/issues/2224/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2223
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2223/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2223/comments
https://api.github.com/repos/huggingface/datasets/issues/2223/events
https://github.com/huggingface/datasets/pull/2223
857,870,800
MDExOlB1bGxSZXF1ZXN0NjE1MjE4MDIz
2,223
Set test cache config
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> why a cache dir per test function does not work?\r\n\r\nProbably because we end up with multiple `datasets_module` in the python path. This breaks the import of all the datasets/metrics modules.\r\nIf you want to use one modules cache per test, you may need remove the `datasets_module` that was added to the pyt...
1,618,404,924,000
1,618,513,885,000
1,618,513,885,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2223", "html_url": "https://github.com/huggingface/datasets/pull/2223", "diff_url": "https://github.com/huggingface/datasets/pull/2223.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2223.patch" }
Currently, running the tests populates the default cache directory `"~/.cache"`. This PR monkey-patches the config to set the cache directory within the temporary test directory, avoiding side effects.
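A rough sketch of the monkey-patching approach (the attribute names `HF_DATASETS_CACHE` and `HF_METRICS_CACHE` are assumptions about the config module, and the fixture is illustrative rather than the PR's code):

```python
import pytest


@pytest.fixture(autouse=True)
def isolated_cache(monkeypatch, tmp_path):
    import datasets.config

    # Redirect both caches into the per-test temporary directory so the
    # test run never writes into the user's real "~/.cache".
    monkeypatch.setattr(datasets.config, "HF_DATASETS_CACHE", str(tmp_path / "datasets"))
    monkeypatch.setattr(datasets.config, "HF_METRICS_CACHE", str(tmp_path / "metrics"))
```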
https://api.github.com/repos/huggingface/datasets/issues/2223/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2222
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2222/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2222/comments
https://api.github.com/repos/huggingface/datasets/issues/2222/events
https://github.com/huggingface/datasets/pull/2222
857,847,231
MDExOlB1bGxSZXF1ZXN0NjE1MTk5MTM5
2,222
Fix too long WindowsFileLock name
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892913, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": "This will not be worked on" } ]
closed
false
null
[]
null
[ "Windows users should disable the max path length limit. It's a nightmare to handle it.\r\nAlso the lock path must not be changed in a random way. Otherwise from another process the lock path might not be the same and the locking mechanism won't work.", "Do you agree with handling the case where MAX_PATH is not d...
1,618,403,212,000
1,618,412,425,000
1,618,411,579,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2222", "html_url": "https://github.com/huggingface/datasets/pull/2222", "diff_url": "https://github.com/huggingface/datasets/pull/2222.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2222.patch" }
Fix WindowsFileLock name longer than allowed MAX_PATH by shortening the basename.
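A sketch of the shortening idea (a hypothetical helper; the PR was closed as wontfix, and as the comments note, any replacement must be deterministic so every process derives the same lock path):

```python
import hashlib
import os

MAX_PATH = 255  # illustrative limit; the Windows MAX_PATH is 260


def shorten_lock_path(lock_path: str) -> str:
    # Deterministically replace an over-long basename with its hash so
    # all processes agree on the same shortened lock file name.
    if len(lock_path) <= MAX_PATH:
        return lock_path
    directory, basename = os.path.split(lock_path)
    digest = hashlib.sha1(basename.encode("utf-8")).hexdigest()
    return os.path.join(directory, digest + ".lock")
```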
https://api.github.com/repos/huggingface/datasets/issues/2222/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2221
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2221/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2221/comments
https://api.github.com/repos/huggingface/datasets/issues/2221/events
https://github.com/huggingface/datasets/pull/2221
857,833,770
MDExOlB1bGxSZXF1ZXN0NjE1MTg4MTE5
2,221
Add SLR70 - SLR80 and SLR86 to OpenSLR dataset
{ "login": "cahya-wirawan", "id": 7669893, "node_id": "MDQ6VXNlcjc2Njk4OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cahya-wirawan", "html_url": "https://github.com/cahya-wirawan", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,618,402,158,000
1,618,408,219,000
1,618,408,219,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2221", "html_url": "https://github.com/huggingface/datasets/pull/2221", "diff_url": "https://github.com/huggingface/datasets/pull/2221.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2221.patch" }
I would like to add SLR70, SLR71, SLR72, SLR73, SLR74, SLR75, SLR76, SLR77, SLR78, SLR79, SLR80 and SLR86 to the OpenSLR dataset. The languages are: Nigerian English, Chilean Spanish, Colombian Spanish, Peruvian Spanish, Puerto Rico Spanish, Venezuelan Spanish, Basque, Galician, Gujarati and Kannada.
https://api.github.com/repos/huggingface/datasets/issues/2221/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2220
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2220/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2220/comments
https://api.github.com/repos/huggingface/datasets/issues/2220/events
https://github.com/huggingface/datasets/pull/2220
857,774,626
MDExOlB1bGxSZXF1ZXN0NjE1MTM4NDQz
2,220
Fix infinite loop in WindowsFileLock
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892913, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": "This will not be worked on" } ]
closed
false
null
[]
null
[ "How is it possible to get an infinite loop ? Can you add more details ?", "Yes, in Windows, if the filename is too long, a `FileNotFoundError` is raised. The exception should be raised in this case. Otherwise, we get into an infinite loop.\r\n\r\nIf other process has the file locked, then `PermissionError` is ra...
1,618,397,398,000
1,618,412,390,000
1,618,412,374,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2220", "html_url": "https://github.com/huggingface/datasets/pull/2220", "diff_url": "https://github.com/huggingface/datasets/pull/2220.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2220.patch" }
Raise exception to avoid infinite loop.
https://api.github.com/repos/huggingface/datasets/issues/2220/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2219
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2219/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2219/comments
https://api.github.com/repos/huggingface/datasets/issues/2219/events
https://github.com/huggingface/datasets/pull/2219
857,321,242
MDExOlB1bGxSZXF1ZXN0NjE0NzYxMzA3
2,219
Added CUAD dataset
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "1) Changed the language in a few places apart from those you mentioned in README\r\n2) Reduced the size of dummy data folder by removing all other entries except the first\r\n3) Updated YAML tags by using to the past version of `datasets-tagging` app. Will update the quick fix on that repository too in a while", ...
1,618,347,903,000
1,619,274,351,000
1,618,563,044,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2219", "html_url": "https://github.com/huggingface/datasets/pull/2219", "diff_url": "https://github.com/huggingface/datasets/pull/2219.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2219.patch" }
Dataset link: https://github.com/TheAtticusProject/cuad/

Working on README.md currently. Closes #2084 and [#1](https://github.com/TheAtticusProject/cuad/issues/1).
https://api.github.com/repos/huggingface/datasets/issues/2219/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2218
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2218/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2218/comments
https://api.github.com/repos/huggingface/datasets/issues/2218/events
https://github.com/huggingface/datasets/issues/2218
857,238,435
MDU6SXNzdWU4NTcyMzg0MzU=
2,218
Duplicates in the LAMA dataset
{ "login": "amarasovic", "id": 7276193, "node_id": "MDQ6VXNlcjcyNzYxOTM=", "avatar_url": "https://avatars.githubusercontent.com/u/7276193?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amarasovic", "html_url": "https://github.com/amarasovic", "followers_url": "https://api.github.com/users/amarasovic/followers", "following_url": "https://api.github.com/users/amarasovic/following{/other_user}", "gists_url": "https://api.github.com/users/amarasovic/gists{/gist_id}", "starred_url": "https://api.github.com/users/amarasovic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amarasovic/subscriptions", "organizations_url": "https://api.github.com/users/amarasovic/orgs", "repos_url": "https://api.github.com/users/amarasovic/repos", "events_url": "https://api.github.com/users/amarasovic/events{/privacy}", "received_events_url": "https://api.github.com/users/amarasovic/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi,\r\n\r\ncurrently the datasets API doesn't have a dedicated function to remove duplicate rows, but since the LAMA dataset is not too big (it fits in RAM), we can leverage pandas to help us remove duplicates:\r\n```python\r\n>>> from datasets import load_dataset, Dataset\r\n>>> dataset = load_dataset('lama', spl...
1,618,340,389,000
1,618,436,547,000
null
NONE
null
null
I observed duplicates in the LAMA probing dataset; see the minimal code below.

```
>>> import datasets
>>> dataset = datasets.load_dataset('lama')
No config specified, defaulting to: lama/trex
Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13eca0a18e77dfb3960bb31741e973586f5c1fe1ec0d6b5eece7bddc)
>>> train_dataset = dataset['train']
>>> train_dataset[0]
{'description': 'language or languages a person has learned from early childhood', 'label': 'native language', 'masked_sentence': 'Louis Jules Trochu ([lwi ʒyl tʁɔʃy]; 12 March 1815 – 7 October 1896) was a [MASK] military leader and politician.', 'obj_label': 'French', 'obj_surface': 'French', 'obj_uri': 'Q150', 'predicate_id': 'P103', 'sub_label': 'Louis Jules Trochu', 'sub_surface': 'Louis Jules Trochu', 'sub_uri': 'Q441235', 'template': 'The native language of [X] is [Y] .', 'template_negated': '[X] is not owned by [Y] .', 'type': 'N-1', 'uuid': '40b2ed1c-0961-482e-844e-32596b6117c8'}
>>> train_dataset[1]
{'description': 'language or languages a person has learned from early childhood', 'label': 'native language', 'masked_sentence': 'Louis Jules Trochu ([lwi ʒyl tʁɔʃy]; 12 March 1815 – 7 October 1896) was a [MASK] military leader and politician.', 'obj_label': 'French', 'obj_surface': 'French', 'obj_uri': 'Q150', 'predicate_id': 'P103', 'sub_label': 'Louis Jules Trochu', 'sub_surface': 'Louis Jules Trochu', 'sub_uri': 'Q441235', 'template': 'The native language of [X] is [Y] .', 'template_negated': '[X] is not owned by [Y] .', 'type': 'N-1', 'uuid': '40b2ed1c-0961-482e-844e-32596b6117c8'}
```

I checked the original data available at https://dl.fbaipublicfiles.com/LAMA/data.zip. This particular duplicate comes from:

```
{"uuid": "40b2ed1c-0961-482e-844e-32596b6117c8", "obj_uri": "Q150", "obj_label": "French", "sub_uri": "Q441235", "sub_label": "Louis Jules Trochu", "predicate_id": "P103", "evidences": [{"sub_surface": "Louis Jules Trochu", "obj_surface": "French", "masked_sentence": "Louis Jules Trochu ([lwi \u0292yl t\u0281\u0254\u0283y]; 12 March 1815 \u2013 7 October 1896) was a [MASK] military leader and politician."}, {"sub_surface": "Louis Jules Trochu", "obj_surface": "French", "masked_sentence": "Louis Jules Trochu ([lwi \u0292yl t\u0281\u0254\u0283y]; 12 March 1815 \u2013 7 October 1896) was a [MASK] military leader and politician."}]}
```

What is the best way to deal with these duplicates if I want to use `datasets` to probe with LAMA?
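A minimal sketch of the pandas route suggested in the comments (assuming the default `trex` config; LAMA fits in RAM, so a round-trip through pandas is feasible):

```python
from datasets import load_dataset, Dataset

dataset = load_dataset("lama", split="train")  # defaults to lama/trex

# Round-trip through pandas to drop exact duplicate rows.
df = dataset.to_pandas()
deduped = Dataset.from_pandas(df.drop_duplicates(), preserve_index=False)
```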
https://api.github.com/repos/huggingface/datasets/issues/2218/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2217
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2217/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2217/comments
https://api.github.com/repos/huggingface/datasets/issues/2217/events
https://github.com/huggingface/datasets/pull/2217
857,011,314
MDExOlB1bGxSZXF1ZXN0NjE0NTAxNjIz
2,217
Revert breaking change in cache_files property
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,618,323,604,000
1,618,410,264,000
1,618,410,263,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2217", "html_url": "https://github.com/huggingface/datasets/pull/2217", "diff_url": "https://github.com/huggingface/datasets/pull/2217.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2217.patch" }
#2025 changed the format of `Dataset.cache_files`. Before, it was formatted like

```python
[{"filename": "path/to/file.arrow", "start": 0, "end": 1337}]
```

and it was changed to

```python
["path/to/file.arrow"]
```

since there are no start/end offsets available anymore. To make this less breaking, I'm setting the format back to a list of dicts:

```python
[{"filename": "path/to/file.arrow"}]
```
https://api.github.com/repos/huggingface/datasets/issues/2217/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2216
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2216/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2216/comments
https://api.github.com/repos/huggingface/datasets/issues/2216/events
https://github.com/huggingface/datasets/pull/2216
856,955,534
MDExOlB1bGxSZXF1ZXN0NjE0NDU0MjE1
2,216
added real label for glue/mrpc to test set
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "organizations_url": "https://api.github.com/users/philschmid/orgs", "repos_url": "https://api.github.com/users/philschmid/repos", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "received_events_url": "https://api.github.com/users/philschmid/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,618,320,020,000
1,618,322,000,000
1,618,321,999,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2216", "html_url": "https://github.com/huggingface/datasets/pull/2216", "diff_url": "https://github.com/huggingface/datasets/pull/2216.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2216.patch" }
Added real labels to the `mrpc` task in `glue.py` for the test split.
https://api.github.com/repos/huggingface/datasets/issues/2216/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2215
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2215/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2215/comments
https://api.github.com/repos/huggingface/datasets/issues/2215/events
https://github.com/huggingface/datasets/pull/2215
856,716,791
MDExOlB1bGxSZXF1ZXN0NjE0MjUyNTEy
2,215
Add datasets SLR35 and SLR36 to OpenSLR
{ "login": "cahya-wirawan", "id": 7669893, "node_id": "MDQ6VXNlcjc2Njk4OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cahya-wirawan", "html_url": "https://github.com/cahya-wirawan", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq,\r\nCould you please help me, I got this error message in all \"ci/circleci: run_dataset_script_tests_pyarrow*\" tests:\r\n```\r\n...\r\n \"\"\"Wrapper classes for various types of tokenization.\"\"\"\r\n \r\n from bleurt.lib import bert_tokenization\r\n import tensorflow.compat.v1 as tf\r\...
1,618,302,247,000
1,618,322,714,000
1,618,322,714,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2215", "html_url": "https://github.com/huggingface/datasets/pull/2215", "diff_url": "https://github.com/huggingface/datasets/pull/2215.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2215.patch" }
I would like to add [SLR35](https://openslr.org/35/) (18GB) and [SLR36](https://openslr.org/36/) (22GB), which are large Javanese and Sundanese ASR training datasets collected by Google in collaboration with Reykjavik University and Universitas Gadjah Mada in Indonesia.
https://api.github.com/repos/huggingface/datasets/issues/2215/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2214
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2214/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2214/comments
https://api.github.com/repos/huggingface/datasets/issues/2214/events
https://github.com/huggingface/datasets/issues/2214
856,333,657
MDU6SXNzdWU4NTYzMzM2NTc=
2,214
load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'
{ "login": "nsaphra", "id": 414788, "node_id": "MDQ6VXNlcjQxNDc4OA==", "avatar_url": "https://avatars.githubusercontent.com/u/414788?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nsaphra", "html_url": "https://github.com/nsaphra", "followers_url": "https://api.github.com/users/nsaphra/followers", "following_url": "https://api.github.com/users/nsaphra/following{/other_user}", "gists_url": "https://api.github.com/users/nsaphra/gists{/gist_id}", "starred_url": "https://api.github.com/users/nsaphra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nsaphra/subscriptions", "organizations_url": "https://api.github.com/users/nsaphra/orgs", "repos_url": "https://api.github.com/users/nsaphra/repos", "events_url": "https://api.github.com/users/nsaphra/events{/privacy}", "received_events_url": "https://api.github.com/users/nsaphra/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @nsaphra, thanks for reporting.\r\n\r\nThis issue was fixed in `datasets` version 1.3.0. Could you please update `datasets` and tell me if the problem persists?\r\n```shell\r\npip install -U datasets\r\n```", "There might be a bug in the conda version of `datasets` 1.2.1 where the datasets/metric scripts are ...
1,618,259,161,000
1,619,191,202,000
1,619,191,202,000
NONE
null
null
I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package. ```python >>> from datasets import load_metric >>> metric = load_metric("glue", "sst2") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 502, in load_metric File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 66, in import_main_class File "/ext3/miniconda3/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 783, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/ns4008/.cache/huggingface/modules/datasets_modules/metrics/glue/e4606ab9804a36bcd5a9cebb2cb65bb14b6ac78ee9e6d5981fa679a495dd55de/glue.py", line 105, in <module> @datasets.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) AttributeError: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' ```
https://api.github.com/repos/huggingface/datasets/issues/2214/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2213
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2213/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2213/comments
https://api.github.com/repos/huggingface/datasets/issues/2213/events
https://github.com/huggingface/datasets/pull/2213
856,025,320
MDExOlB1bGxSZXF1ZXN0NjEzNjcwODk2
2,213
Fix lc_quad download checksum
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,618,237,019,000
1,618,437,894,000
1,618,407,745,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2213", "html_url": "https://github.com/huggingface/datasets/pull/2213", "diff_url": "https://github.com/huggingface/datasets/pull/2213.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2213.patch" }
Fixes #2211
https://api.github.com/repos/huggingface/datasets/issues/2213/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2212
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2212/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2212/comments
https://api.github.com/repos/huggingface/datasets/issues/2212/events
https://github.com/huggingface/datasets/issues/2212
855,999,133
MDU6SXNzdWU4NTU5OTkxMzM=
2,212
Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset
{ "login": "hanss0n", "id": 21348833, "node_id": "MDQ6VXNlcjIxMzQ4ODMz", "avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hanss0n", "html_url": "https://github.com/hanss0n", "followers_url": "https://api.github.com/users/hanss0n/followers", "following_url": "https://api.github.com/users/hanss0n/following{/other_user}", "gists_url": "https://api.github.com/users/hanss0n/gists{/gist_id}", "starred_url": "https://api.github.com/users/hanss0n/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hanss0n/subscriptions", "organizations_url": "https://api.github.com/users/hanss0n/orgs", "repos_url": "https://api.github.com/users/hanss0n/repos", "events_url": "https://api.github.com/users/hanss0n/events{/privacy}", "received_events_url": "https://api.github.com/users/hanss0n/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! Apparently the data are not available from this url anymore. We'll replace it with the new url when it's available", "I saw this on their website when we request to download the dataset:\r\n![image](https://user-images.githubusercontent.com/19718818/114879600-fa458680-9e1e-11eb-9e05-f0963d68ff0f.png)\r\n\r\...
1,618,235,396,000
1,621,289,826,000
null
NONE
null
null
I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running: ```Python fquad = load_dataset("fquad") ``` which produces the following error: ``` Using custom data configuration default Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, post-processed: Unknown size, total: 9.76 MiB) to /root/.cache/huggingface/datasets/fquad/default/0.1.0/778dc2c85813d05ddd0c17087294d5f8f24820752340958070876b677af9f061... --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) <ipython-input-48-a2721797e23b> in <module>() ----> 1 fquad = load_dataset("fquad") 11 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) 614 raise FileNotFoundError("Couldn't find file at {}".format(url)) 615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") --> 616 raise ConnectionError("Couldn't reach {}".format(url)) 617 618 # Try a second time ConnectionError: Couldn't reach https://storage.googleapis.com/illuin/fquad/train.json.zip ``` Does anyone know why that is and how to fix it?
https://api.github.com/repos/huggingface/datasets/issues/2212/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2211
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2211/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2211/comments
https://api.github.com/repos/huggingface/datasets/issues/2211/events
https://github.com/huggingface/datasets/issues/2211
855,988,410
MDU6SXNzdWU4NTU5ODg0MTA=
2,211
Getting checksum error when trying to load lc_quad dataset
{ "login": "hanss0n", "id": 21348833, "node_id": "MDQ6VXNlcjIxMzQ4ODMz", "avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hanss0n", "html_url": "https://github.com/hanss0n", "followers_url": "https://api.github.com/users/hanss0n/followers", "following_url": "https://api.github.com/users/hanss0n/following{/other_user}", "gists_url": "https://api.github.com/users/hanss0n/gists{/gist_id}", "starred_url": "https://api.github.com/users/hanss0n/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hanss0n/subscriptions", "organizations_url": "https://api.github.com/users/hanss0n/orgs", "repos_url": "https://api.github.com/users/hanss0n/repos", "events_url": "https://api.github.com/users/hanss0n/events{/privacy}", "received_events_url": "https://api.github.com/users/hanss0n/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nI've already opened a PR with the fix. If you are in a hurry, just build the project from source and run:\r\n```bash\r\ndatasets-cli test datasets/lc_quad --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\n", "Ah sorry, I tried searching but couldn't find any related PR. \r\n\r\nThank you...
1,618,234,738,000
1,618,407,745,000
1,618,407,745,000
NONE
null
null
I'm having issues loading the [lc_quad](https://huggingface.co/datasets/lc_quad) dataset by running: ```Python lc_quad = load_dataset("lc_quad") ``` which gives me the following error: ``` Using custom data configuration default Downloading and preparing dataset lc_quad/default (download: 3.69 MiB, generated: 19.77 MiB, post-processed: Unknown size, total: 23.46 MiB) to /root/.cache/huggingface/datasets/lc_quad/default/2.0.0/5a98fe174603f5dec6df07edf1c2b4d2317210d2ad61f5a393839bca4d64e5a7... --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-42-404ace83f73c> in <module>() ----> 1 lc_quad = load_dataset("lc_quad") 3 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 37 if len(bad_urls) > 0: 38 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 40 logger.info("All the checksums matched successfully" + for_verification_name) 41 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/AskNowQA/LC-QuAD2.0/archive/master.zip'] ``` Does anyone know why this could be and how to fix it?
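For anyone hitting this before the fix is released, a hedged workaround (only sensible if the download itself is fine and merely the recorded checksum is stale) is to skip the verification step, which `load_dataset` supported at the time via `ignore_verifications`:

```python
from datasets import load_dataset

# skip checksum/size verification of the downloaded source files
lc_quad = load_dataset("lc_quad", ignore_verifications=True)
```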
https://api.github.com/repos/huggingface/datasets/issues/2211/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2210
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2210/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2210/comments
https://api.github.com/repos/huggingface/datasets/issues/2210/events
https://github.com/huggingface/datasets/issues/2210
855,709,400
MDU6SXNzdWU4NTU3MDk0MDA=
2,210
dataloading slow when using HUGE dataset
{ "login": "hwijeen", "id": 29157715, "node_id": "MDQ6VXNlcjI5MTU3NzE1", "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hwijeen", "html_url": "https://github.com/hwijeen", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "repos_url": "https://api.github.com/users/hwijeen/repos", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Yes this is an issue with `datasets<=1.5.0`\r\nThis issue has been fixed by #2122 , we'll do a new release soon :)\r\nFor now you can test it on the `master` branch.", "Hi, thank you for your answer. I did not realize that my issue stems from the same problem. " ]
1,618,216,382,000
1,618,279,385,000
1,618,279,385,000
NONE
null
null
Hi, When I use datasets with 600GB of data, the dataloading time increases significantly. I am experimenting with two datasets: one is about 60GB and the other is 600GB. Simply speaking, my code uses the `datasets.set_format("torch")` function and lets pytorch-lightning handle DDP training. When looking at the pytorch-lightning profiles of the two runs, I see that fetching a batch (`get_train_batch`) consumes an unreasonable amount of time when the data is large. What could be the cause? * 60GB data ``` Action | Mean duration (s) |Num calls | Total time (s) | Percentage % | ------------------------------------------------------------------------------------------------------------------------------------ Total | - |_ | 200.33 | 100 % | ------------------------------------------------------------------------------------------------------------------------------------ run_training_epoch | 71.994 |1 | 71.994 | 35.937 | run_training_batch | 0.64373 |100 | 64.373 | 32.133 | optimizer_step_and_closure_0 | 0.64322 |100 | 64.322 | 32.108 | training_step_and_backward | 0.61004 |100 | 61.004 | 30.452 | model_backward | 0.37552 |100 | 37.552 | 18.745 | model_forward | 0.22813 |100 | 22.813 | 11.387 | training_step | 0.22759 |100 | 22.759 | 11.361 | get_train_batch | 0.066385 |100 | 6.6385 | 3.3138 | ``` * 600GB data ``` Action | Mean duration (s) |Num calls | Total time (s) | Percentage % | ------------------------------------------------------------------------------------------------------------------------------------ Total | - |_ | 3285.6 | 100 % | ------------------------------------------------------------------------------------------------------------------------------------ run_training_epoch | 1397.9 |1 | 1397.9 | 42.546 | run_training_batch | 7.2596 |100 | 725.96 | 22.095 | optimizer_step_and_closure_0 | 7.2589 |100 | 725.89 | 22.093 | training_step_and_backward | 7.223 |100 | 722.3 | 21.984 | model_backward | 6.9662 |100 | 696.62 | 21.202 | get_train_batch | 6.322 |100 | 632.2 | 19.241 | model_forward | 0.24902 |100 | 24.902 | 0.75789 | training_step | 0.2485 |100 | 24.85 | 0.75633 | ```
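For reference, a minimal sketch of how one might time batch fetching outside of pytorch-lightning to isolate the issue (the dataset path and column name are placeholders):

```python
import time

from datasets import load_from_disk
from torch.utils.data import DataLoader

dataset = load_from_disk("path/to/dataset")          # hypothetical path
dataset.set_format("torch", columns=["input_ids"])   # placeholder column

loader = DataLoader(dataset, batch_size=8)
start = time.time()
for i, batch in enumerate(loader):
    if i == 100:  # time the first 100 batches, like the profiles above
        break
print(f"fetched 100 batches in {time.time() - start:.2f}s")
```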
https://api.github.com/repos/huggingface/datasets/issues/2210/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2209
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2209/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2209/comments
https://api.github.com/repos/huggingface/datasets/issues/2209/events
https://github.com/huggingface/datasets/pull/2209
855,638,232
MDExOlB1bGxSZXF1ZXN0NjEzMzQwMTI2
2,209
Add code of conduct to the project
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[]
1,618,211,774,000
1,618,250,152,000
1,618,250,152,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2209", "html_url": "https://github.com/huggingface/datasets/pull/2209", "diff_url": "https://github.com/huggingface/datasets/pull/2209.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2209.patch" }
Add code of conduct to the project and link it from README and CONTRIBUTING. This was already done in `transformers`.
https://api.github.com/repos/huggingface/datasets/issues/2209/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2208
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2208/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2208/comments
https://api.github.com/repos/huggingface/datasets/issues/2208/events
https://github.com/huggingface/datasets/pull/2208
855,343,835
MDExOlB1bGxSZXF1ZXN0NjEzMTAxMzMw
2,208
Remove Python2 leftovers
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "merging since the CI is fixed on master" ]
1,618,157,283,000
1,618,437,936,000
1,618,407,651,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2208", "html_url": "https://github.com/huggingface/datasets/pull/2208", "diff_url": "https://github.com/huggingface/datasets/pull/2208.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2208.patch" }
This PR removes Python 2 leftovers since this project targets Python 3.6+ (and as of 2020, Python 2 is no longer officially supported).
https://api.github.com/repos/huggingface/datasets/issues/2208/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2207
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2207/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2207/comments
https://api.github.com/repos/huggingface/datasets/issues/2207/events
https://github.com/huggingface/datasets/issues/2207
855,267,383
MDU6SXNzdWU4NTUyNjczODM=
2,207
making labels consistent across the datasets
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! The ClassLabel feature type encodes the labels as integers.\r\nThe integer corresponds to the index of the label name in the `names` list of the ClassLabel.\r\nHere that means that the labels are 'entailment' (0), 'neutral' (1), 'contradiction' (2).\r\n\r\nYou can get the label names back by using `a.features...
1,618,135,436,000
1,618,408,920,000
null
NONE
null
null
Hi, For accessing the labels one can type: ``` >>> a.features['label'] ClassLabel(num_classes=3, names=['entailment', 'neutral', 'contradiction'], names_file=None, id=None) ``` However, the label names are sometimes not consistent with the actual labels: for instance, in the case of XNLI the actual labels are 0, 1, 2, but if one accesses them as above they appear as entailment, neutral, contradiction. It would be great to have the labels consistent. Thanks
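For reference, the `ClassLabel` feature already exposes the mapping in both directions, so integers and names can be converted explicitly (a sketch, reusing `a` from the snippet above):

```python
label_feature = a.features["label"]

# integer -> name, e.g. 0 -> 'entailment'
print(label_feature.int2str(0))

# name -> integer, e.g. 'contradiction' -> 2
print(label_feature.str2int("contradiction"))
```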
https://api.github.com/repos/huggingface/datasets/issues/2207/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2206/comments
https://api.github.com/repos/huggingface/datasets/issues/2206/events
https://github.com/huggingface/datasets/issues/2206
855,252,415
MDU6SXNzdWU4NTUyNTI0MTU=
2,206
Got pyarrow error when loading a dataset while adding special tokens into the tokenizer
{ "login": "yana-xuyan", "id": 38536635, "node_id": "MDQ6VXNlcjM4NTM2NjM1", "avatar_url": "https://avatars.githubusercontent.com/u/38536635?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yana-xuyan", "html_url": "https://github.com/yana-xuyan", "followers_url": "https://api.github.com/users/yana-xuyan/followers", "following_url": "https://api.github.com/users/yana-xuyan/following{/other_user}", "gists_url": "https://api.github.com/users/yana-xuyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/yana-xuyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yana-xuyan/subscriptions", "organizations_url": "https://api.github.com/users/yana-xuyan/orgs", "repos_url": "https://api.github.com/users/yana-xuyan/repos", "events_url": "https://api.github.com/users/yana-xuyan/events{/privacy}", "received_events_url": "https://api.github.com/users/yana-xuyan/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi,\r\n\r\nthe output of the tokenizers is treated specially in the lib to optimize the dataset size (see the code [here](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L138-L141)). It looks like that one of the values in a dictionary returned by the tokenizer is out of the assume...
1,618,130,409,000
1,618,380,386,000
null
NONE
null
null
I added five more special tokens to the GPT2 tokenizer. After that, when I try to pre-process the data using my previous code, I get the error shown below: Traceback (most recent call last): File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_single writer.write(example) File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 296, in write self.write_on_file() File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 270, in write_on_file pa_array = pa.array(typed_sequence) File "pyarrow/array.pxi", line 222, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 108, in __arrow_array__ out = out.cast(pa.list_(self.optimized_int_type)) File "pyarrow/array.pxi", line 810, in pyarrow.lib.Array.cast File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/pyarrow/compute.py", line 281, in cast return call_function("cast", [arr], options) File "pyarrow/_compute.pyx", line 465, in pyarrow._compute.call_function File "pyarrow/_compute.pyx", line 294, in pyarrow._compute.Function.call File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Integer value 50259 not in range: -128 to 127 Do you have any idea what might be causing it?
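A hedged workaround based on the error (an assumption, not a confirmed fix): the writer appears to downcast token ids to int8, so forcing a wider integer type by passing explicit `features` to `map` may avoid the overflow. The feature names below are placeholders for whatever the tokenization function actually returns:

```python
from datasets import Features, Sequence, Value

features = Features({
    "input_ids": Sequence(Value("int32")),      # wide enough for ids >= 50257
    "attention_mask": Sequence(Value("int8")),  # masks stay in range
})

# tokenize_function is the user's preprocessing function (hypothetical name)
dataset = dataset.map(tokenize_function, features=features)
```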
https://api.github.com/repos/huggingface/datasets/issues/2206/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2205
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2205/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2205/comments
https://api.github.com/repos/huggingface/datasets/issues/2205/events
https://github.com/huggingface/datasets/pull/2205
855,207,605
MDExOlB1bGxSZXF1ZXN0NjEzMDAwMzYw
2,205
Updating citation information on LinCE readme
{ "login": "gaguilar", "id": 5833357, "node_id": "MDQ6VXNlcjU4MzMzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gaguilar", "html_url": "https://github.com/gaguilar", "followers_url": "https://api.github.com/users/gaguilar/followers", "following_url": "https://api.github.com/users/gaguilar/following{/other_user}", "gists_url": "https://api.github.com/users/gaguilar/gists{/gist_id}", "starred_url": "https://api.github.com/users/gaguilar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gaguilar/subscriptions", "organizations_url": "https://api.github.com/users/gaguilar/orgs", "repos_url": "https://api.github.com/users/gaguilar/repos", "events_url": "https://api.github.com/users/gaguilar/events{/privacy}", "received_events_url": "https://api.github.com/users/gaguilar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,618,111,085,000
1,618,250,014,000
1,618,250,014,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2205", "html_url": "https://github.com/huggingface/datasets/pull/2205", "diff_url": "https://github.com/huggingface/datasets/pull/2205.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2205.patch" }
Hi! I just updated the citation information in this PR. It previously contained an additional BibTeX entry from one of the datasets used in LinCE in addition to the LinCE BibTeX entry. I removed the former and added a link that shows the full list of citations for each dataset. Thanks!
https://api.github.com/repos/huggingface/datasets/issues/2205/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2204
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2204/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2204/comments
https://api.github.com/repos/huggingface/datasets/issues/2204/events
https://github.com/huggingface/datasets/pull/2204
855,144,431
MDExOlB1bGxSZXF1ZXN0NjEyOTU1MzM2
2,204
Add configurable options to `seqeval` metric
{ "login": "marrodion", "id": 44571847, "node_id": "MDQ6VXNlcjQ0NTcxODQ3", "avatar_url": "https://avatars.githubusercontent.com/u/44571847?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marrodion", "html_url": "https://github.com/marrodion", "followers_url": "https://api.github.com/users/marrodion/followers", "following_url": "https://api.github.com/users/marrodion/following{/other_user}", "gists_url": "https://api.github.com/users/marrodion/gists{/gist_id}", "starred_url": "https://api.github.com/users/marrodion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marrodion/subscriptions", "organizations_url": "https://api.github.com/users/marrodion/orgs", "repos_url": "https://api.github.com/users/marrodion/repos", "events_url": "https://api.github.com/users/marrodion/events{/privacy}", "received_events_url": "https://api.github.com/users/marrodion/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,618,084,699,000
1,618,494,586,000
1,618,494,586,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2204", "html_url": "https://github.com/huggingface/datasets/pull/2204", "diff_url": "https://github.com/huggingface/datasets/pull/2204.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2204.patch" }
Fixes #2148 Adds options to use strict mode, different schemes of evaluation, sample weight and adjust zero_division behavior, if encountered. `seqeval` provides schemes as objects, hence dynamic import from string, to avoid making the user do the import (thanks to @albertvillanova for the `importlib` idea).
https://api.github.com/repos/huggingface/datasets/issues/2204/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2203
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2203/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2203/comments
https://api.github.com/repos/huggingface/datasets/issues/2203/events
https://github.com/huggingface/datasets/pull/2203
855,053,595
MDExOlB1bGxSZXF1ZXN0NjEyODg4MzA5
2,203
updated banking77 train and test data
{ "login": "hsali", "id": 6765330, "node_id": "MDQ6VXNlcjY3NjUzMzA=", "avatar_url": "https://avatars.githubusercontent.com/u/6765330?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hsali", "html_url": "https://github.com/hsali", "followers_url": "https://api.github.com/users/hsali/followers", "following_url": "https://api.github.com/users/hsali/following{/other_user}", "gists_url": "https://api.github.com/users/hsali/gists{/gist_id}", "starred_url": "https://api.github.com/users/hsali/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hsali/subscriptions", "organizations_url": "https://api.github.com/users/hsali/orgs", "repos_url": "https://api.github.com/users/hsali/repos", "events_url": "https://api.github.com/users/hsali/events{/privacy}", "received_events_url": "https://api.github.com/users/hsali/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Can you add a description regarding this PR ? Why do you think we need to update the dummy data used to test the `banking77` dataset loading script ?", "Closing for inactivity. Feel free to re-open if you want to push this change" ]
1,618,056,610,000
1,619,188,419,000
1,619,188,419,000
NONE
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2203", "html_url": "https://github.com/huggingface/datasets/pull/2203", "diff_url": "https://github.com/huggingface/datasets/pull/2203.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2203.patch" }
https://api.github.com/repos/huggingface/datasets/issues/2203/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2202
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2202/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2202/comments
https://api.github.com/repos/huggingface/datasets/issues/2202/events
https://github.com/huggingface/datasets/pull/2202
854,501,109
MDExOlB1bGxSZXF1ZXN0NjEyNDM2ODMx
2,202
Add classes GenerateMode, DownloadConfig and Version to the documentation
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,617,973,099,000
1,618,250,280,000
1,618,250,279,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2202", "html_url": "https://github.com/huggingface/datasets/pull/2202", "diff_url": "https://github.com/huggingface/datasets/pull/2202.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2202.patch" }
Add documentation for classes `GenerateMode`, `DownloadConfig` and `Version`. Update the docstring of `load_dataset` to create cross-reference links to the classes. Related to #2187.
https://api.github.com/repos/huggingface/datasets/issues/2202/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2201
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2201/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2201/comments
https://api.github.com/repos/huggingface/datasets/issues/2201/events
https://github.com/huggingface/datasets/pull/2201
854,499,563
MDExOlB1bGxSZXF1ZXN0NjEyNDM1NTE3
2,201
Fix ArrowWriter overwriting features in ArrowBasedBuilder
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,617,972,979,000
1,618,234,337,000
1,618,234,336,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2201", "html_url": "https://github.com/huggingface/datasets/pull/2201", "diff_url": "https://github.com/huggingface/datasets/pull/2201.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2201.patch" }
This should fix the issues with CSV loading experienced in #2153 and #2200. The CSV builder is an ArrowBasedBuilder that had an issue with the ArrowWriter used to write the arrow file from the CSV data. The writer wasn't initialized with the features passed by the user, so it was inferring the features from the arrow data and discarding the ones passed by the user. I fixed that and updated the tests.
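In spirit, the change boils down to constructing the writer with the builder's declared features instead of letting it infer them; a simplified sketch of the idea (not the exact diff):

```python
# before: no features passed, so they were inferred from the arrow data
# writer = ArrowWriter(path=fpath)

# after: the user-provided features are respected
writer = ArrowWriter(features=self.info.features, path=fpath)
```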
https://api.github.com/repos/huggingface/datasets/issues/2201/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2200
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2200/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2200/comments
https://api.github.com/repos/huggingface/datasets/issues/2200/events
https://github.com/huggingface/datasets/issues/2200
854,449,656
MDU6SXNzdWU4NTQ0NDk2NTY=
2,200
_prepare_split will overwrite DatasetBuilder.info.features
{ "login": "Gforky", "id": 4157614, "node_id": "MDQ6VXNlcjQxNTc2MTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4157614?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Gforky", "html_url": "https://github.com/Gforky", "followers_url": "https://api.github.com/users/Gforky/followers", "following_url": "https://api.github.com/users/Gforky/following{/other_user}", "gists_url": "https://api.github.com/users/Gforky/gists{/gist_id}", "starred_url": "https://api.github.com/users/Gforky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Gforky/subscriptions", "organizations_url": "https://api.github.com/users/Gforky/orgs", "repos_url": "https://api.github.com/users/Gforky/repos", "events_url": "https://api.github.com/users/Gforky/events{/privacy}", "received_events_url": "https://api.github.com/users/Gforky/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "Hi ! This might be related to #2153 \r\n\r\nYou're right the ArrowWriter should be initialized with `features=self.info.features` ! Good catch\r\nI'm opening a PR to fix this and also to figure out how it was not caught in the tests\r\n\r\nEDIT: opened #2201", "> Hi ! This might be related to #2153\r\n> \r\n> Yo...
1,617,968,833,000
1,622,803,055,000
1,622,803,055,000
NONE
null
null
Hi, here is my issue: I initialized a CSV dataset builder with specific features: ``` def get_dataset_features(data_args): features = {} if data_args.text_features: features.update({text_feature: hf_features.Value("string") for text_feature in data_args.text_features.strip().split(",")}) if data_args.num_features: features.update({text_feature: hf_features.Value("float32") for text_feature in data_args.num_features.strip().split(",")}) if data_args.label_classes: features["label"] = hf_features.ClassLabel(names=data_args.label_classes.strip().split(",")) else: features["label"] = hf_features.Value("float32") return hf_features.Features(features) datasets = load_dataset(extension, data_files=data_files, sep=data_args.delimiter, header=data_args.header, column_names=data_args.column_names.split(",") if data_args.column_names else None, features=get_dataset_features(data_args=data_args)) ``` The `features` are printed out as below before `builder_instance.as_dataset` is called: ``` {'label': ClassLabel(num_classes=2, names=['unacceptable', 'acceptable'], names_file=None, id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)} ``` But after `builder_instance.as_dataset` is called for the CSV dataset builder, the `features` are changed to: ``` {'label': Value(dtype='int64', id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)} ``` After digging into the code, I realized that in `ArrowBasedBuilder._prepare_split`, the DatasetBuilder's info features are overwritten by the `ArrowWriter`'s `_features`. But the `ArrowWriter` is initialized without passing `features`. So my concern is: must this overwrite happen, or should there be an option to pass features to the `_prepare_split` function?
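Until this is resolved, a hedged downstream check/workaround (assuming `datasets` is the `DatasetDict` loaded above; in some versions the casting method is the in-place `cast_` instead):

```python
expected_features = get_dataset_features(data_args=data_args)

# verify what actually got written
print(datasets["train"].features)

# cast the split back to the intended schema if it was overwritten
datasets["train"] = datasets["train"].cast(expected_features)
```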
https://api.github.com/repos/huggingface/datasets/issues/2200/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2199
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2199/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2199/comments
https://api.github.com/repos/huggingface/datasets/issues/2199/events
https://github.com/huggingface/datasets/pull/2199
854,417,318
MDExOlB1bGxSZXF1ZXN0NjEyMzY0ODU3
2,199
Fix backward compatibility in Dataset.load_from_disk
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq, could you please check if this makes sense? Thanks.", "What about using `_indices_data_files` field in save_to_disk instead of `_indices_files` ?\r\nThis way future datasets can also be reloaded from older versions of the lib\r\n\r\n`_indices_files` was introduced in a recent PR and was not released...
1,617,966,070,000
1,617,983,825,000
1,617,983,825,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2199", "html_url": "https://github.com/huggingface/datasets/pull/2199", "diff_url": "https://github.com/huggingface/datasets/pull/2199.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2199.patch" }
Fix backward compatibility when loading from disk an old dataset that was saved with indices under the key "_indices_data_files". Related to #2195.
https://api.github.com/repos/huggingface/datasets/issues/2199/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2198
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2198/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2198/comments
https://api.github.com/repos/huggingface/datasets/issues/2198/events
https://github.com/huggingface/datasets/pull/2198
854,357,481
MDExOlB1bGxSZXF1ZXN0NjEyMzE0MTIz
2,198
added file_permission in load_dataset
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "From offline discussions: we want to make the permissions handling consistent with `transformers`. However from discussion in https://github.com/huggingface/transformers/pull/11119 it looks like it might not be a good solution to provide this argument. Users should use umask for now, and we'll see how things evol...
1,617,961,146,000
1,618,582,306,000
1,618,582,306,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2198", "html_url": "https://github.com/huggingface/datasets/pull/2198", "diff_url": "https://github.com/huggingface/datasets/pull/2198.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2198.patch" }
As discussed in #2065 I've added a `file_permission` argument in `load_dataset`. It mainly adds 2 things: 1) The permissions of downloaded datasets, once converted to .arrow files, can be changed with the `file_permission` argument in `load_dataset` (the default is 0o644). 2) In case the user later uses `map` to generate another cache file for the dataset, it ensures that the permissions of the newly generated file match those of the `*-train.arrow` file inside the cache_dir for that dataset.
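A hypothetical usage sketch of the proposed argument (the name and default come from this PR's description; per the discussion above, the PR was ultimately closed in favor of umask, so only the second variant works with released versions). The dataset name is a placeholder:

```python
import os

from datasets import load_dataset

# proposed in this PR, never released: readable cache files
dataset = load_dataset("squad", file_permission=0o644)

# umask-based alternative for released versions:
# new files get 0o644 and new directories 0o755
os.umask(0o022)
dataset = load_dataset("squad")
```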
https://api.github.com/repos/huggingface/datasets/issues/2198/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2197
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2197/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2197/comments
https://api.github.com/repos/huggingface/datasets/issues/2197/events
https://github.com/huggingface/datasets/pull/2197
854,356,559
MDExOlB1bGxSZXF1ZXN0NjEyMzEzMzQw
2,197
fix missing indices_files in load_from_disk
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,617,961,077,000
1,617,962,080,000
1,617,962,079,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2197", "html_url": "https://github.com/huggingface/datasets/pull/2197", "diff_url": "https://github.com/huggingface/datasets/pull/2197.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2197.patch" }
This should fix #2195. `load_from_disk` was failing if there was no "_indices_files" field in state.json. This can happen if the dataset has no indices mapping.
https://api.github.com/repos/huggingface/datasets/issues/2197/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2196
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2196/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2196/comments
https://api.github.com/repos/huggingface/datasets/issues/2196/events
https://github.com/huggingface/datasets/issues/2196
854,126,114
MDU6SXNzdWU4NTQxMjYxMTQ=
2,196
`load_dataset` caches two arrow files?
{ "login": "hwijeen", "id": 29157715, "node_id": "MDQ6VXNlcjI5MTU3NzE1", "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hwijeen", "html_url": "https://github.com/hwijeen", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "repos_url": "https://api.github.com/users/hwijeen/repos", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
closed
false
null
[]
null
[ "Hi ! Files that starts with `cache-*` are cached computation files, i.e. they are the cached results of map/filter/cast/etc. operations. For example if you used `map` on your dataset to transform it, then the resulting dataset is going to be stored and cached in a `cache-*` file. These files are used to avoid havi...
1,617,940,159,000
1,618,205,129,000
1,618,205,129,000
NONE
null
null
Hi, I am using datasets to load a large JSON file of 587GB. I checked the cache folder and found that two arrow files were created: * `cache-ed205e500a7dc44c.arrow` - 355GB * `json-train.arrow` - 582GB Why is the first file created? If I delete it, would I still be able to `load_from_disk`?
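For reference, the two kinds of files can be distinguished programmatically, and `cache-*` files (cached map/filter results) can be removed safely through the API (a sketch, with a hypothetical path):

```python
from datasets import load_from_disk

dataset = load_from_disk("path/to/dataset")

print(dataset.cache_files)  # the arrow files currently backing the dataset

# deletes cache-* files in the dataset's cache directory that are not in use
removed = dataset.cleanup_cache_files()
print(f"removed {removed} cache file(s)")
```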
https://api.github.com/repos/huggingface/datasets/issues/2196/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2195
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2195/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2195/comments
https://api.github.com/repos/huggingface/datasets/issues/2195/events
https://github.com/huggingface/datasets/issues/2195
854,070,194
MDU6SXNzdWU4NTQwNzAxOTQ=
2,195
KeyError: '_indices_files' in `arrow_dataset.py`
{ "login": "samsontmr", "id": 15007950, "node_id": "MDQ6VXNlcjE1MDA3OTUw", "avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4", "gravatar_id": "", "url": "https://api.github.com/users/samsontmr", "html_url": "https://github.com/samsontmr", "followers_url": "https://api.github.com/users/samsontmr/followers", "following_url": "https://api.github.com/users/samsontmr/following{/other_user}", "gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}", "starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions", "organizations_url": "https://api.github.com/users/samsontmr/orgs", "repos_url": "https://api.github.com/users/samsontmr/repos", "events_url": "https://api.github.com/users/samsontmr/events{/privacy}", "received_events_url": "https://api.github.com/users/samsontmr/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Thanks for reporting @samsontmr.\r\n\r\nIt seems a backward compatibility issue...", "Thanks @samsontmr this should be fixed on master now\r\n\r\nFeel free to reopen if you're still having issues" ]
1,617,932,232,000
1,617,962,109,000
1,617,962,079,000
NONE
null
null
After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset. Trace: ``` Traceback (most recent call last): File "load_data.py", line 11, in <module> dataset = load_from_disk(SRC) File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/load.py", line 784, in load_from_disk return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory) File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 692, in load_from_disk dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory) File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 634, in load_from_disk if state["_indices_files"]: KeyError: '_indices_files' ``` I believe this is the line causing the error since there may not be a `_indices_files` key in the older versions: https://github.com/huggingface/datasets/blob/b70141e3c5149430951773aaa0155555c5fb3e76/src/datasets/arrow_dataset.py#L634 May I suggest using `state.get()` instead of directly indexing the dictionary? @lhoestq
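As a sketch, the suggested defensive change would look like this (the fix that actually landed may differ in detail):

```python
# instead of assuming the key exists:
# if state["_indices_files"]:

# tolerate state.json files written by older versions:
if state.get("_indices_files"):
    ...
```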
https://api.github.com/repos/huggingface/datasets/issues/2195/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2194
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2194/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2194/comments
https://api.github.com/repos/huggingface/datasets/issues/2194/events
https://github.com/huggingface/datasets/issues/2194
853,909,452
MDU6SXNzdWU4NTM5MDk0NTI=
2,194
py3.7: TypeError: can't pickle _LazyModule objects
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "\r\nThis wasn't a `datasets` problem, but `transformers`' and it was solved here https://github.com/huggingface/transformers/pull/11168\r\n" ]
1,617,915,768,000
1,617,987,410,000
1,617,933,177,000
CONTRIBUTOR
null
null
While this works fine with py3.8, under py3.7, with a totally new conda env and transformers install: ``` git clone https://github.com/huggingface/transformers cd transformers pip install -e .[testing] export BS=1; rm -rf /tmp/test-clm; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python \ examples/language-modeling/run_clm.py --model_name_or_path distilgpt2 --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 --do_train --max_train_samples 1 \ --per_device_train_batch_size $BS --output_dir /tmp/test-clm --block_size 128 --logging_steps 1 \ --fp16 ``` ``` Traceback (most recent call last): File "examples/language-modeling/run_clm.py", line 453, in <module> main() File "examples/language-modeling/run_clm.py", line 336, in main load_from_cache_file=not data_args.overwrite_cache, File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in map for k, dataset in self.items() File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp> for k, dataset in self.items() File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1259, in map update_data=update_data, File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 157, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 158, in wrapper self._fingerprint, transform, kwargs_for_fingerprint File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 105, in update_fingerprint hasher.update(transform_args[key]) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 57, in update self.m.update(self.hash(value).encode("utf-8")) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 53, in hash return cls.hash_default(value) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 46, in hash_default return cls.hash_bytes(dumps(value)) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 389, in dumps dump(obj, file) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 361, in dump Pickler(file, recurse=True).dump(obj) File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump StockPickler.dump(self, obj) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 437, in dump self.save(obj) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 556, in save_function obj=obj, File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 524, in save rv = reduce(self.proto) TypeError: can't pickle _LazyModule objects ``` ``` $ python --version Python 3.7.4 $ python -m torch.utils.collect_env Collecting environment information... PyTorch version: 1.8.0.dev20210110+cu110 Is debug build: False CUDA used to build PyTorch: 11.0 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.2 LTS (x86_64) GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0 Clang version: 10.0.0-4ubuntu1 CMake version: version 3.16.3 ``` Thanks.
https://api.github.com/repos/huggingface/datasets/issues/2194/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2193
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2193/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2193/comments
https://api.github.com/repos/huggingface/datasets/issues/2193/events
https://github.com/huggingface/datasets/issues/2193
853,725,707
MDU6SXNzdWU4NTM3MjU3MDc=
2,193
Filtering/mapping on one column is very slow
{ "login": "norabelrose", "id": 39116809, "node_id": "MDQ6VXNlcjM5MTE2ODA5", "avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4", "gravatar_id": "", "url": "https://api.github.com/users/norabelrose", "html_url": "https://github.com/norabelrose", "followers_url": "https://api.github.com/users/norabelrose/followers", "following_url": "https://api.github.com/users/norabelrose/following{/other_user}", "gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}", "starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions", "organizations_url": "https://api.github.com/users/norabelrose/orgs", "repos_url": "https://api.github.com/users/norabelrose/repos", "events_url": "https://api.github.com/users/norabelrose/events{/privacy}", "received_events_url": "https://api.github.com/users/norabelrose/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
closed
false
null
[]
null
[ "Hi ! Yes we are working on making `filter` significantly faster. You can look at related PRs here: #2060 #2178 \r\n\r\nI think you can expect to have the fast version of `filter` available next week.\r\n\r\nWe'll make it only select one column, and we'll also make the overall filtering operation way faster by avoi...
1,617,905,774,000
1,619,453,639,000
1,619,453,639,000
CONTRIBUTOR
null
null
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation. I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_columns=['num_tokens']`, it seems that the entirety of each row is loaded into memory, which makes the operation take much longer than it should. Indeed, `filter` currently just calls `map`, and I found that in `_map_single` on lines 1690-1704 of `arrow_dataset.py`, the method is just grabbing slices of _all the rows_ of the dataset and then passing only the specified columns to the map function. It seems that, when the user passes a value for `input_columns`, the `map` function should create a temporary pyarrow table by selecting just those columns, and then get slices from that table. Or something like that— I'm not very familiar with the pyarrow API. I know that in the meantime I can sort of get around this by simply only returning the rows that match my filter criterion from the tokenizing function I pass to `map()`, but I actually _also_ want to map on just the `num_tokens` column in order to compute batches with a roughly uniform number of tokens per batch. I would also ideally like to be able to change my minimum and maximum article lengths without having to re-tokenize the entire dataset. PS: This is definitely not a "dataset request." I'm realizing that I don't actually know how to remove labels from my own issues on other people's repos, if that is even possible.
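A minimal sketch of a workaround, shown on a toy dataset: instead of `filter()`, read just the one column (a plain Python list), compute the matching row indices with numpy, and call `select()`, which only stores an indices mapping and never loads full rows into memory. ```python import numpy as np from datasets import Dataset dataset = Dataset.from_dict({"text": ["a", "bb", "ccc"], "num_tokens": [1, 2, 3]}) num_tokens = np.array(dataset["num_tokens"]) # reads only this column keep = np.where((num_tokens >= 2) & (num_tokens <= 3))[0] filtered = dataset.select(keep) # cheap indices mapping, no row copies print(filtered["num_tokens"]) # [2, 3] ```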
https://api.github.com/repos/huggingface/datasets/issues/2193/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2192
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2192/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2192/comments
https://api.github.com/repos/huggingface/datasets/issues/2192/events
https://github.com/huggingface/datasets/pull/2192
853,547,910
MDExOlB1bGxSZXF1ZXN0NjExNjE5NTY0
2,192
Fix typo in huggingface hub
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,617,892,944,000
1,617,896,861,000
1,617,896,860,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2192", "html_url": "https://github.com/huggingface/datasets/pull/2192", "diff_url": "https://github.com/huggingface/datasets/pull/2192.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2192.patch" }
pip knows how to resolve to `huggingface_hub`, but conda doesn't! The `packaging` dependency is also required for the build to complete.
https://api.github.com/repos/huggingface/datasets/issues/2192/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2191
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2191/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2191/comments
https://api.github.com/repos/huggingface/datasets/issues/2191/events
https://github.com/huggingface/datasets/pull/2191
853,364,204
MDExOlB1bGxSZXF1ZXN0NjExNDY1Nzc0
2,191
Refactorize tests to use Dataset as context manager
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 2851292821, "node_id": "MDU6TGFiZWwyODUxMjkyODIx", "url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring", "name": "refactoring", "color": "B67A40", "default": false, "description": "Restructuring existing code without changing its external behavior" } ]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/1", "html_url": "https://github.com/huggingface/datasets/milestone/1", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels", "id": 6644198, "node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==", "number": 1, "title": "1.6", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 4, "state": "closed", "created_at": 1617973671000, "updated_at": 1618937446000, "due_on": 1618556400000, "closed_at": 1618937446000 }
[ "I find very interesting that idea of using a fixture instead!\r\n\r\nLet me rework a little bit this PR, @lhoestq.", "@lhoestq, as this is a big refactoring, I had many problems to solve the conflicts with the master branch...\r\n\r\nTherefore, I think it is better to merge this as it is, and then to make other ...
1,617,880,864,000
1,618,818,791,000
1,618,818,790,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2191", "html_url": "https://github.com/huggingface/datasets/pull/2191", "diff_url": "https://github.com/huggingface/datasets/pull/2191.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2191.patch" }
Refactorize Dataset tests to use Dataset as context manager.
https://api.github.com/repos/huggingface/datasets/issues/2191/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2190
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2190/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2190/comments
https://api.github.com/repos/huggingface/datasets/issues/2190/events
https://github.com/huggingface/datasets/issues/2190
853,181,564
MDU6SXNzdWU4NTMxODE1NjQ=
2,190
News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs
{ "login": "anassalamah", "id": 8571003, "node_id": "MDQ6VXNlcjg1NzEwMDM=", "avatar_url": "https://avatars.githubusercontent.com/u/8571003?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anassalamah", "html_url": "https://github.com/anassalamah", "followers_url": "https://api.github.com/users/anassalamah/followers", "following_url": "https://api.github.com/users/anassalamah/following{/other_user}", "gists_url": "https://api.github.com/users/anassalamah/gists{/gist_id}", "starred_url": "https://api.github.com/users/anassalamah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anassalamah/subscriptions", "organizations_url": "https://api.github.com/users/anassalamah/orgs", "repos_url": "https://api.github.com/users/anassalamah/repos", "events_url": "https://api.github.com/users/anassalamah/events{/privacy}", "received_events_url": "https://api.github.com/users/anassalamah/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @anassalamah,\r\n\r\nCould you please try with this:\r\n```python\r\ntrain_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[:98%]')\r\nval_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[98%:]')\r\n```", "Hello @albertvillanova, \r\n\r\nThanks for...
1,617,868,423,000
1,621,850,635,000
1,621,850,635,000
NONE
null
null
I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi. ``` train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]') val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]') # filtering out examples that are not ar-en translations but ar-hi val_ds = val_ds.filter(lambda example, indice: indice not in chain(range(1312,1327) ,range(1384,1399), range(1030,1042)), with_indices=True) ``` * I'm fairly new to using datasets so I might be doing something wrong
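A sketch of the fix suggested in the comment above: passing the languages as explicit `lang1`/`lang2` keyword arguments, rather than the `"ar-en"` config string, selects the intended translation pair, making the index-based filtering unnecessary. ```python from datasets import load_dataset train_ds = load_dataset("news_commentary", lang1="ar", lang2="en", split="train[:98%]") val_ds = load_dataset("news_commentary", lang1="ar", lang2="en", split="train[98%:]") ```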
https://api.github.com/repos/huggingface/datasets/issues/2190/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2189
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2189/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2189/comments
https://api.github.com/repos/huggingface/datasets/issues/2189/events
https://github.com/huggingface/datasets/issues/2189
853,052,891
MDU6SXNzdWU4NTMwNTI4OTE=
2,189
save_to_disk doesn't work when we use concatenate_datasets function before creating the final dataset_object.
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! We refactored save_to_disk in #2025 so this doesn't happen.\r\nFeel free to try it on master for now\r\nWe'll do a new release soon" ]
1,617,856,973,000
1,618,408,625,000
null
NONE
null
null
It saves the entire dataset instead of only the selected shards. @lhoestq You can check by going through the following example: ``` from datasets import load_from_disk, concatenate_datasets loaded_data = load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset') n = 20 kb_list = [loaded_data.shard(n, i, contiguous=True) for i in range(n)] final_dataset = concatenate_datasets([kb_list[1], kb_list[2]]) final_dataset.save_to_disk('/home/gsir059/haha/k.arrow') ```
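A minimal sketch of a possible workaround, not confirmed by this thread: `shard()`/`concatenate_datasets()` only attach an indices mapping on top of the original arrow file, so `flatten_indices()` can materialize just the selected rows before saving. ```python from datasets import load_from_disk, concatenate_datasets loaded_data = load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset') n = 20 kb_list = [loaded_data.shard(n, i, contiguous=True) for i in range(n)] final_dataset = concatenate_datasets([kb_list[1], kb_list[2]]) final_dataset = final_dataset.flatten_indices() # writes only the kept rows final_dataset.save_to_disk('/home/gsir059/haha/k.arrow') ```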
https://api.github.com/repos/huggingface/datasets/issues/2189/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2188
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2188/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2188/comments
https://api.github.com/repos/huggingface/datasets/issues/2188/events
https://github.com/huggingface/datasets/issues/2188
853,044,166
MDU6SXNzdWU4NTMwNDQxNjY=
2,188
Duplicate data in Timit dataset
{ "login": "BHM-RB", "id": 78190188, "node_id": "MDQ6VXNlcjc4MTkwMTg4", "avatar_url": "https://avatars.githubusercontent.com/u/78190188?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BHM-RB", "html_url": "https://github.com/BHM-RB", "followers_url": "https://api.github.com/users/BHM-RB/followers", "following_url": "https://api.github.com/users/BHM-RB/following{/other_user}", "gists_url": "https://api.github.com/users/BHM-RB/gists{/gist_id}", "starred_url": "https://api.github.com/users/BHM-RB/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BHM-RB/subscriptions", "organizations_url": "https://api.github.com/users/BHM-RB/orgs", "repos_url": "https://api.github.com/users/BHM-RB/repos", "events_url": "https://api.github.com/users/BHM-RB/events{/privacy}", "received_events_url": "https://api.github.com/users/BHM-RB/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting\r\nIf I recall correctly this has been recently fixed #1995\r\nCan you try to upgrade your local version of `datasets` ?\r\n```\r\npip install --upgrade datasets\r\n```", "Hi Ihoestq,\r\n\r\nThank you. It works after upgrading the datasets\r\n" ]
1,617,855,714,000
1,617,883,999,000
1,617,883,999,000
NONE
null
null
I ran a simple script to list all the texts in the Timit dataset, and the texts were all the same. Is this dataset corrupted? **Code:** timit = load_dataset("timit_asr") print(*timit['train']['text'], sep='\n') **Result:** Would such an act of refusal be useful? Would such an act of refusal be useful? Would such an act of refusal be useful? Would such an act of refusal be useful? ... ... Would such an act of refusal be useful?
https://api.github.com/repos/huggingface/datasets/issues/2188/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2187
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2187/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2187/comments
https://api.github.com/repos/huggingface/datasets/issues/2187/events
https://github.com/huggingface/datasets/issues/2187
852,939,736
MDU6SXNzdWU4NTI5Mzk3MzY=
2,187
Question (potential issue?) related to datasets caching
{ "login": "ioana-blue", "id": 17202292, "node_id": "MDQ6VXNlcjE3MjAyMjky", "avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ioana-blue", "html_url": "https://github.com/ioana-blue", "followers_url": "https://api.github.com/users/ioana-blue/followers", "following_url": "https://api.github.com/users/ioana-blue/following{/other_user}", "gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}", "starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions", "organizations_url": "https://api.github.com/users/ioana-blue/orgs", "repos_url": "https://api.github.com/users/ioana-blue/repos", "events_url": "https://api.github.com/users/ioana-blue/events{/privacy}", "received_events_url": "https://api.github.com/users/ioana-blue/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
open
false
null
[]
null
[ "An educated guess: does this refer to the fact that depending on the custom column names in the dataset files (csv in this case), there is a dataset loader being created? and this dataset loader - using the \"custom data configuration\" is used among all jobs running using this particular csv files? (thinking out ...
1,617,840,988,000
1,618,412,158,000
null
NONE
null
null
I thought I had disabled datasets caching in my code, as follows: ``` from datasets import set_caching_enabled ... def main(): # disable caching in datasets set_caching_enabled(False) ``` However, in my log files I see messages like the following: ``` 04/07/2021 18:34:42 - WARNING - datasets.builder - Using custom data configuration default-888a87931cbc5877 04/07/2021 18:34:42 - WARNING - datasets.builder - Reusing dataset csv (xxxx/cache-transformers/datasets/csv/default-888a87931cbc5877/0.0.0/965b6429be0fc05f975b608ce64e1fa941cc8fb4f30629b523d2390f3c0e1a93 ``` Can you please let me know what this reusing dataset csv means? I wouldn't expect any reusing with the datasets caching disabled. Thank you!
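A minimal sketch of the distinction at play (behaviour to verify on your version): `set_caching_enabled(False)` disables the fingerprint cache used by `map()`/`filter()` transforms, while the "Reusing dataset csv" message comes from the builder reusing the already prepared arrow files, which is controlled separately through `download_mode`. ```python from datasets import load_dataset, set_caching_enabled set_caching_enabled(False) # only affects transform caching dataset = load_dataset( "csv", data_files={"train": "train.csv"}, # illustrative file name download_mode="force_redownload", # re-prepare instead of reusing ) ```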
https://api.github.com/repos/huggingface/datasets/issues/2187/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2186
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2186/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2186/comments
https://api.github.com/repos/huggingface/datasets/issues/2186/events
https://github.com/huggingface/datasets/pull/2186
852,840,819
MDExOlB1bGxSZXF1ZXN0NjExMDMxNzE0
2,186
GEM: new challenge sets
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "cc @sebastiangehrmann" ]
1,617,831,547,000
1,617,832,595,000
1,617,832,595,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2186", "html_url": "https://github.com/huggingface/datasets/pull/2186", "diff_url": "https://github.com/huggingface/datasets/pull/2186.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2186.patch" }
This PR updates the GEM dataset to: - remove extraneous fields in WikiAuto after https://github.com/huggingface/datasets/pull/2171 fixed the source - add context and services to Schema Guided Dialog - Add new or update challenge sets for MLSUM ES and DE, XSUM, and SGD
https://api.github.com/repos/huggingface/datasets/issues/2186/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2185
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2185/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2185/comments
https://api.github.com/repos/huggingface/datasets/issues/2185/events
https://github.com/huggingface/datasets/issues/2185
852,684,395
MDU6SXNzdWU4NTI2ODQzOTU=
2,185
.map() and distributed training
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi, one workaround would be to save the mapped(tokenized in your case) file using `save_to_disk`, and having each process load this file using `load_from_disk`. This is what I am doing, and in this case, I turn off the ability to automatically load from the cache.\r\n\r\nAlso, multiprocessing the map function seem...
1,617,819,734,000
1,617,982,711,000
1,617,982,711,000
MEMBER
null
null
Hi, I have a question regarding distributed training and the `.map` call on a dataset. I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`. `dataset` is then tokenized: ```python datasets = load_from_disk(dataset_path=my_path) [...] def tokenize_function(examples): return tokenizer(examples[text_column_name]) logger.info("Mapping dataset to tokenized dataset.") tokenized_datasets = datasets.map( tokenize_function, batched=True, num_proc=preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=True, ) ``` I am using 31 workers (`preprocessing_num_workers=31`) and thus it creates 31 `cache*.arrow` files in `my_path/train` (there is only a train split). When I relaunch the script, the map tokenization is skipped in favor of loading the 31 previously cached files, and that's perfect. Everything so far was done by launching a **single process script**. I now launch the same training script in **distributed mode** (`python -m torch.distributed.launch --nproc_per_node 2`). However, once it reaches the map call, it re-does the tokenization... instead of loading the 31 cached files. I tried adding the `cache_file_name` argument: `cache_file_name={"train": my_path/one_of_the_arrow_file}`, but I can't give the 31 cached files, so it probably isn't the right way to do it. **My question: what is the best way to load cached files if they were pre-processed and dumped in multiple arrow files?** It seems automatically handled for single processes but fails on distributed training. - I am following the same structure as the examples of transformers (more specifically [run_clm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) in my case) - I am using version 1.5.0 of datasets, if that matters.
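A minimal sketch of the `save_to_disk`/`load_from_disk` workaround mentioned in the comments (paths and the tokenize_function stub are illustrative, and the process group is assumed to be initialized by the launcher): only rank 0 runs the expensive `map()` and saves the result, then every rank loads the saved dataset, so the other processes never re-tokenize. ```python import torch.distributed as dist from datasets import load_from_disk def tokenize_function(examples): # stand-in for the real tokenization from the training script return {"input_ids": [[0] * len(t) for t in examples["text"]]} def prepare(raw_path, tokenized_path): if dist.get_rank() == 0: datasets = load_from_disk(raw_path) tokenized = datasets.map(tokenize_function, batched=True, num_proc=31) tokenized.save_to_disk(tokenized_path) dist.barrier() # everyone waits until rank 0 has finished writing return load_from_disk(tokenized_path) # every rank reads the same files ```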
https://api.github.com/repos/huggingface/datasets/issues/2185/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2184
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2184/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2184/comments
https://api.github.com/repos/huggingface/datasets/issues/2184/events
https://github.com/huggingface/datasets/pull/2184
852,597,258
MDExOlB1bGxSZXF1ZXN0NjEwODIxMTc0
2,184
Implementation of class_encode_column
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Made the required changes @lhoestq , sorry it took so much time!" ]
1,617,814,063,000
1,618,573,477,000
1,618,572,419,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2184", "html_url": "https://github.com/huggingface/datasets/pull/2184", "diff_url": "https://github.com/huggingface/datasets/pull/2184.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2184.patch" }
Addresses #2176 I'm happy to discuss the API and internals!
https://api.github.com/repos/huggingface/datasets/issues/2184/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2183
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2183/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2183/comments
https://api.github.com/repos/huggingface/datasets/issues/2183/events
https://github.com/huggingface/datasets/pull/2183
852,518,411
MDExOlB1bGxSZXF1ZXN0NjEwNzU3MjUz
2,183
Fix s3fs tests for py36 and py37+
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,617,808,631,000
1,617,872,085,000
1,617,872,084,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2183", "html_url": "https://github.com/huggingface/datasets/pull/2183", "diff_url": "https://github.com/huggingface/datasets/pull/2183.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2183.patch" }
Recently several changes happened: 1. latest versions of `fsspec` require python>3.7 for async features 2. `s3fs` added a dependency on `aiobotocore`, which is not compatible with the `moto` s3 mock context manager This PR fixes both issues, by pinning `fsspec` and `s3fs` for python 3.6, and by using `moto` in server mode to support running the tests on python>=3.7 with the latest version of `fsspec` and `s3fs`. cc @philschmid
https://api.github.com/repos/huggingface/datasets/issues/2183/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2182
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2182/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2182/comments
https://api.github.com/repos/huggingface/datasets/issues/2182/events
https://github.com/huggingface/datasets/pull/2182
852,384,872
MDExOlB1bGxSZXF1ZXN0NjEwNjQ2MDIy
2,182
Set default in-memory value depending on the dataset size
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/1", "html_url": "https://github.com/huggingface/datasets/milestone/1", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels", "id": 6644198, "node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==", "number": 1, "title": "1.6", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 4, "state": "closed", "created_at": 1617973671000, "updated_at": 1618937446000, "due_on": 1618556400000, "closed_at": 1618937446000 }
[ "I ping @krandiash to keep him up to date.", "TODO:\r\n- [x] Add a section in the docs about this.\r\n- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~", "@lhoestq I have a questi...
1,617,800,418,000
1,618,928,412,000
1,618,913,044,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2182", "html_url": "https://github.com/huggingface/datasets/pull/2182", "diff_url": "https://github.com/huggingface/datasets/pull/2182.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2182.patch" }
Set a default value for `in_memory` depending on the size of the dataset to be loaded. Close #2179. TODO: - [x] Add a section in the docs about this. - ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~
https://api.github.com/repos/huggingface/datasets/issues/2182/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2181
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2181/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2181/comments
https://api.github.com/repos/huggingface/datasets/issues/2181/events
https://github.com/huggingface/datasets/issues/2181
852,261,607
MDU6SXNzdWU4NTIyNjE2MDc=
2,181
Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
{ "login": "hwijeen", "id": 29157715, "node_id": "MDQ6VXNlcjI5MTU3NzE1", "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hwijeen", "html_url": "https://github.com/hwijeen", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "repos_url": "https://api.github.com/users/hwijeen/repos", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Can you try to increase the block size ? For example\r\n```python\r\nblock_size_10MB = 10<<20\r\nload_dataset(\"json\", ..., block_size=block_size_10MB)\r\n```\r\nThe block size corresponds to how much bytes to process at a time from the input stream.\r\nThis will determine multi-threading granularity as well...
1,617,791,206,000
1,618,211,755,000
1,618,211,755,000
NONE
null
null
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and am now using it for a fairly big project. When loading a huge json file of 500GB, pyarrow complains as follows: ``` Traceback (most recent call last): File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 531, in incomplete_dir yield tmp_dir File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 573, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 650, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 1027, in _prepare_split for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose): File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__ for obj in iterable: File "/app/.cache/huggingface/modules/datasets_modules/datasets/json/9498524fd296a6cca99c66d6c5be507d1c0991f5a814e535b507f4a66096a641/json.py", line 83, in _generate_tables parse_options=self.config.pa_parse_options, File "pyarrow/_json.pyx", line 247, in pyarrow._json.read_json File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?) ``` When using only a small portion of the sample file, say the first 100 lines, it works perfectly well. I see that the error comes from pyarrow, but could you give me a hint or possible solutions? #369 describes the same error and #372 claims to have fixed the issue, but I have no clue why I am still getting this one. Thanks in advance!
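A sketch following the maintainer's suggestion above (the data file path is illustrative): raise pyarrow's JSON read block size so that no single JSON object straddles two block boundaries. ```python from datasets import load_dataset block_size_10MB = 10 << 20 # 10 MiB, processed at a time from the input stream dataset = load_dataset( "json", data_files="my_huge_file.json", block_size=block_size_10MB, ) ```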
https://api.github.com/repos/huggingface/datasets/issues/2181/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2180
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2180/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2180/comments
https://api.github.com/repos/huggingface/datasets/issues/2180/events
https://github.com/huggingface/datasets/pull/2180
852,258,635
MDExOlB1bGxSZXF1ZXN0NjEwNTQxOTA2
2,180
Add tel to xtreme tatoeba
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,617,790,995,000
1,617,810,635,000
1,617,810,634,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2180", "html_url": "https://github.com/huggingface/datasets/pull/2180", "diff_url": "https://github.com/huggingface/datasets/pull/2180.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2180.patch" }
This should fix issue #2149
https://api.github.com/repos/huggingface/datasets/issues/2180/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2179
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2179/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2179/comments
https://api.github.com/repos/huggingface/datasets/issues/2179/events
https://github.com/huggingface/datasets/issues/2179
852,237,957
MDU6SXNzdWU4NTIyMzc5NTc=
2,179
Load small datasets in-memory instead of using memory map
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6...
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[]
1,617,789,496,000
1,618,913,044,000
1,618,913,043,000
MEMBER
null
null
Currently all datasets are loaded using memory mapping by default in `load_dataset`. However this might not be necessary for small datasets. If a dataset is small enough, then it can be loaded in-memory and: - its memory footprint would be small so it's ok - in-memory computations/queries would be faster - the caching on-disk would be disabled, making computations even faster (no I/O bound because of the disk) - but running the same computation a second time would recompute everything since there would be no cached results on-disk. But this is probably fine since computations would be fast anyway + users should be able to provide a cache filename if needed. Therefore, maybe the default behavior of `load_dataset` should be to load small datasets in-memory and big datasets using memory mapping.
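A minimal sketch of the existing opt-in behaviour this proposal builds on: `keep_in_memory=True` loads the dataset into RAM instead of memory-mapping it; the proposal above is to pick a sensible default automatically based on the dataset size. ```python from datasets import load_dataset # Small dataset: loading it in memory makes queries faster and skips # on-disk caching of subsequent computations. dataset = load_dataset("squad", split="train", keep_in_memory=True) ```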
https://api.github.com/repos/huggingface/datasets/issues/2179/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2178
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2178/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2178/comments
https://api.github.com/repos/huggingface/datasets/issues/2178/events
https://github.com/huggingface/datasets/pull/2178
852,215,058
MDExOlB1bGxSZXF1ZXN0NjEwNTA1Mjg1
2,178
Fix cast memory usage by using map on subtables
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/1", "html_url": "https://github.com/huggingface/datasets/milestone/1", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels", "id": 6644198, "node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==", "number": 1, "title": "1.6", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 4, "state": "closed", "created_at": 1617973671000, "updated_at": 1618937446000, "due_on": 1618556400000, "closed_at": 1618937446000 }
[ "I addressed your comments about the docstrings and the output validation :)", "I updated the bleurt mocking method and bleurt test is passing now.\r\nI also ran the slow tests and they are passing for bleurt.", "Thanks @lhoestq and @albertvillanova !" ]
1,617,787,850,000
1,618,928,444,000
1,618,306,096,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2178", "html_url": "https://github.com/huggingface/datasets/pull/2178", "diff_url": "https://github.com/huggingface/datasets/pull/2178.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2178.patch" }
The `cast` operation on a pyarrow Table may create new arrays in memory. This is an issue since users expect memory mapped datasets to not fill up the RAM. To fix that I used `map` to write a new arrow file on disk when cast is used. To make things more convenient I introduced the `arrow` formatting of a dataset, to make it return pyarrow tables instead of python dicts. This way one can use pyarrow transforms directly when using `map`. edit: we'll use the same mechanism for `filter`
https://api.github.com/repos/huggingface/datasets/issues/2178/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2177
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2177/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2177/comments
https://api.github.com/repos/huggingface/datasets/issues/2177/events
https://github.com/huggingface/datasets/pull/2177
852,065,307
MDExOlB1bGxSZXF1ZXN0NjEwMzc5MDYx
2,177
add social thumbnail
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "organizations_url": "https://api.github.com/users/philschmid/orgs", "repos_url": "https://api.github.com/users/philschmid/repos", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "received_events_url": "https://api.github.com/users/philschmid/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,617,777,606,000
1,617,783,361,000
1,617,783,361,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2177", "html_url": "https://github.com/huggingface/datasets/pull/2177", "diff_url": "https://github.com/huggingface/datasets/pull/2177.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2177.patch" }
# What does this PR do? I added OpenGraph / Twitter Card support to the docs to create nice social thumbnails. ![Bildschirmfoto 2021-04-07 um 08 36 50](https://user-images.githubusercontent.com/32632186/113821698-bac2ce80-977c-11eb-81aa-d8f16355857e.png) To be able to add these I needed to install `sphinxext-opengraph`. I came across this [issue](https://github.com/readthedocs/readthedocs.org/issues/1758) on the readthedocs repo saying that since someone has already built this plugin, they are not going to integrate it or provide documentation for it themselves. That's why I added it here for building the documentation. The repository can be found [here](https://github.com/wpilibsuite/sphinxext-opengraph/tree/main). P.S. It seems that `make style` never ran for `docs/`; I hope the changes are okay, otherwise I'll revert them.
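A minimal `conf.py` sketch for enabling `sphinxext-opengraph` (the URL values are illustrative, not necessarily the ones used in this PR): ```python # conf.py extensions = [ # ... the existing Sphinx extensions ... "sphinxext.opengraph", ] # Base URL used to build absolute og:url tags for each page ogp_site_url = "https://huggingface.co/docs/datasets/" # Image shown in the social card preview ogp_image = "https://example.com/datasets-thumbnail.png" ```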
https://api.github.com/repos/huggingface/datasets/issues/2177/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2176
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2176/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2176/comments
https://api.github.com/repos/huggingface/datasets/issues/2176/events
https://github.com/huggingface/datasets/issues/2176
851,865,795
MDU6SXNzdWU4NTE4NjU3OTU=
2,176
Converting a Value to a ClassLabel
{ "login": "nelson-liu", "id": 7272031, "node_id": "MDQ6VXNlcjcyNzIwMzE=", "avatar_url": "https://avatars.githubusercontent.com/u/7272031?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nelson-liu", "html_url": "https://github.com/nelson-liu", "followers_url": "https://api.github.com/users/nelson-liu/followers", "following_url": "https://api.github.com/users/nelson-liu/following{/other_user}", "gists_url": "https://api.github.com/users/nelson-liu/gists{/gist_id}", "starred_url": "https://api.github.com/users/nelson-liu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nelson-liu/subscriptions", "organizations_url": "https://api.github.com/users/nelson-liu/orgs", "repos_url": "https://api.github.com/users/nelson-liu/repos", "events_url": "https://api.github.com/users/nelson-liu/events{/privacy}", "received_events_url": "https://api.github.com/users/nelson-liu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi @nelson-liu!\r\nHere is what I do to convert a string to class label:\r\n\r\n```python\r\nfrom datasets import load_dataset, features\r\n\r\n\r\ndset = load_dataset(...)\r\ncol_name = \"the string column name\"\r\n\r\nclass_names = dset.unique(col_name)\r\nclass_feature = features.ClassLabel(names=sorted(class...
1,617,749,656,000
1,618,827,034,000
null
NONE
null
null
Hi! In the docs for `cast`, it's noted that `For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.` Would it be possible to have an example that demonstrates such a string <-> ClassLabel conversion using `map`? Thanks!
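For anyone looking for the full recipe, here is a minimal sketch based on the approach in the first comment (the column names and toy data are made up):

```python
from datasets import ClassLabel, Dataset, Features, Value

dset = Dataset.from_dict({"text": ["good", "bad", "fine"], "label": ["pos", "neg", "pos"]})

# build a ClassLabel feature from the unique string values in the column
class_feature = ClassLabel(names=sorted(dset.unique("label")))

# map the strings to integer ids, then update the schema to the new feature type
dset = dset.map(lambda ex: {"label": class_feature.str2int(ex["label"])})
dset = dset.cast(Features({"text": Value("string"), "label": class_feature}))
```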
https://api.github.com/repos/huggingface/datasets/issues/2176/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2175
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2175/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2175/comments
https://api.github.com/repos/huggingface/datasets/issues/2175/events
https://github.com/huggingface/datasets/issues/2175
851,836,096
MDU6SXNzdWU4NTE4MzYwOTY=
2,175
dataset.search_batch() function outputs all -1 indices sometime.
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Actually, I found the answer [here](https://github.com/facebookresearch/faiss/wiki/FAQ#what-does-it-mean-when-a-search-returns--1-ids). \r\n\r\nSo we have to do some modifications to the code for instances where the index doesn't retrieve any IDs.", "@lhoestq @patrickvonplaten \r\n\r\nI also found another short...
1,617,745,849,000
1,618,575,676,000
1,618,575,675,000
NONE
null
null
I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**. During the retrieval phase, exactly at [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_rag.py#L231), an error occurs when all retrieved indices are -1. Please refer to the screenshot of a PID worker. ![image](https://user-images.githubusercontent.com/16892570/113782387-37a67600-9786-11eb-9c29-acad661a9648.png) Here, my retrieval batch size is 2 and n_docs is 5. I can solve this by working around np.stack, but I want to ask why we get an output index of -1. Do you have any idea :) ? Is this a problem with the index, where faiss can't find any similar vector? Is there documentation on the output index being -1? @lhoestq
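For anyone hitting this, faiss pads missing neighbors with id -1 whenever it cannot return `k` results; a minimal sketch that reproduces the padding:

```python
import faiss
import numpy as np

d = 8
index = faiss.IndexFlatL2(d)
index.add(np.random.rand(3, d).astype("float32"))

# asking for more neighbors than the index can return: missing slots get id -1
distances, ids = index.search(np.random.rand(2, d).astype("float32"), 5)
print(ids)  # e.g. [[1 0 2 -1 -1], [2 0 1 -1 -1]]

# callers therefore need to mask the padded entries before using the ids
valid = ids != -1
```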
https://api.github.com/repos/huggingface/datasets/issues/2175/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2174
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2174/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2174/comments
https://api.github.com/repos/huggingface/datasets/issues/2174/events
https://github.com/huggingface/datasets/pull/2174
851,383,675
MDExOlB1bGxSZXF1ZXN0NjA5ODE2OTQ2
2,174
Pin docutils for better doc
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,617,712,820,000
1,617,713,753,000
1,617,713,753,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2174", "html_url": "https://github.com/huggingface/datasets/pull/2174", "diff_url": "https://github.com/huggingface/datasets/pull/2174.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2174.patch" }
The latest release of docutils make the navbar in the documentation weird and the Markdown wrongly interpreted: ![image](https://user-images.githubusercontent.com/35901082/113711773-5be55280-96b3-11eb-9b3b-9794f17709aa.png) We had the same problem in Transformers and solved it by pinning docutils (a dep of sphinx). You can see the version after the change [here](https://32769-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html).
https://api.github.com/repos/huggingface/datasets/issues/2174/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2173
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2173/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2173/comments
https://api.github.com/repos/huggingface/datasets/issues/2173/events
https://github.com/huggingface/datasets/pull/2173
851,359,284
MDExOlB1bGxSZXF1ZXN0NjA5Nzk2NzI2
2,173
Add OpenSLR dataset
{ "login": "cahya-wirawan", "id": 7669893, "node_id": "MDQ6VXNlcjc2Njk4OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cahya-wirawan", "html_url": "https://github.com/cahya-wirawan", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,617,710,914,000
1,618,246,486,000
1,618,246,486,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2173", "html_url": "https://github.com/huggingface/datasets/pull/2173", "diff_url": "https://github.com/huggingface/datasets/pull/2173.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2173.patch" }
OpenSLR (https://openslr.org/) is a site devoted to hosting speech and language resources, such as training corpora for speech recognition, and software related to speech recognition. There are around 80 speech datasets listed on OpenSLR; currently this PR includes only 9 of them: SLR41, SLR42, SLR43, SLR44, SLR63, SLR64, SLR65, SLR66 and SLR69 (Javanese, Khmer, Nepali and Sundanese, Malayalam, Marathi, Tamil, Telugu and Catalan). I can add the other speech datasets gradually later.
https://api.github.com/repos/huggingface/datasets/issues/2173/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2172
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2172/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2172/comments
https://api.github.com/repos/huggingface/datasets/issues/2172/events
https://github.com/huggingface/datasets/pull/2172
851,229,399
MDExOlB1bGxSZXF1ZXN0NjA5Njg4ODgx
2,172
Pin fsspec lower than 0.9.0
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,617,700,749,000
1,617,702,567,000
1,617,702,566,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2172", "html_url": "https://github.com/huggingface/datasets/pull/2172", "diff_url": "https://github.com/huggingface/datasets/pull/2172.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2172.patch" }
Today's release of `fsspec` 0.9.0 brought a new release of `s3fs`, 0.6.0, but this version breaks the CI (see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/5312/workflows/490f3240-cd1c-4dd1-bb60-b416771c5584/jobs/32734) for example). I'm pinning `fsspec` until this has been resolved.
https://api.github.com/repos/huggingface/datasets/issues/2172/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2171
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2171/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2171/comments
https://api.github.com/repos/huggingface/datasets/issues/2171/events
https://github.com/huggingface/datasets/pull/2171
851,090,662
MDExOlB1bGxSZXF1ZXN0NjA5NTY4MDcw
2,171
Fixed the link to wikiauto training data.
{ "login": "mounicam", "id": 11708999, "node_id": "MDQ6VXNlcjExNzA4OTk5", "avatar_url": "https://avatars.githubusercontent.com/u/11708999?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mounicam", "html_url": "https://github.com/mounicam", "followers_url": "https://api.github.com/users/mounicam/followers", "following_url": "https://api.github.com/users/mounicam/following{/other_user}", "gists_url": "https://api.github.com/users/mounicam/gists{/gist_id}", "starred_url": "https://api.github.com/users/mounicam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mounicam/subscriptions", "organizations_url": "https://api.github.com/users/mounicam/orgs", "repos_url": "https://api.github.com/users/mounicam/repos", "events_url": "https://api.github.com/users/mounicam/events{/privacy}", "received_events_url": "https://api.github.com/users/mounicam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Also you can ignore the CI failing on `docs`, this has been fixed on master :)", "@lhoestq I need to update other stuff on GEM later today too, so will merge this one and remove columns in the next PR!", "Ok !" ]
1,617,693,191,000
1,617,725,142,000
1,617,725,109,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2171", "html_url": "https://github.com/huggingface/datasets/pull/2171", "diff_url": "https://github.com/huggingface/datasets/pull/2171.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2171.patch" }
https://api.github.com/repos/huggingface/datasets/issues/2171/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2170
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2170/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2170/comments
https://api.github.com/repos/huggingface/datasets/issues/2170/events
https://github.com/huggingface/datasets/issues/2170
850,913,228
MDU6SXNzdWU4NTA5MTMyMjg=
2,170
Wikipedia historic dumps are deleted but hf/datasets hardcodes dump date
{ "login": "leezu", "id": 946903, "node_id": "MDQ6VXNlcjk0NjkwMw==", "avatar_url": "https://avatars.githubusercontent.com/u/946903?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leezu", "html_url": "https://github.com/leezu", "followers_url": "https://api.github.com/users/leezu/followers", "following_url": "https://api.github.com/users/leezu/following{/other_user}", "gists_url": "https://api.github.com/users/leezu/gists{/gist_id}", "starred_url": "https://api.github.com/users/leezu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leezu/subscriptions", "organizations_url": "https://api.github.com/users/leezu/orgs", "repos_url": "https://api.github.com/users/leezu/repos", "events_url": "https://api.github.com/users/leezu/events{/privacy}", "received_events_url": "https://api.github.com/users/leezu/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "It seems that this can be fixed from user's end by including a `date` argument, like this:\r\n\r\n`dataset = datasets.load_dataset('wikipedia', '20200501.en', date='20210420')`\r\n\r\nYou can get available dates from [here](https://dumps.wikimedia.org/enwiki/).\r\n\r\nThis is not a proper fix however as all the fi...
1,617,678,798,000
1,623,805,850,000
null
NONE
null
null
Wikimedia does not keep all historical dumps. For example, as of today https://dumps.wikimedia.org/kowiki/ only provides ``` 20201220/ 02-Feb-2021 01:36 - 20210101/ 21-Feb-2021 01:26 - 20210120/ 02-Mar-2021 01:25 - 20210201/ 21-Mar-2021 01:26 - 20210220/ 02-Apr-2021 01:26 - 20210301/ 03-Mar-2021 08:10 - 20210320/ 21-Mar-2021 18:13 - 20210401/ 03-Apr-2021 10:08 - latest/ 03-Apr-2021 10:08 - ``` However, the wikipedia dataset provided in the library, only supports the following configs, none of which are applicable anymore when disregarding the cached datasets: ``` ValueError: BuilderConfig 20210401.ko not found. Available: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', 
'20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', '20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu'] ``` The cached datasets: ``` % aws s3 --no-sign-request --endpoint-url https://storage.googleapis.com ls s3://huggingface-nlp/cache/datasets/wikipedia/ PRE 20200501.de/ PRE 20200501.en/ PRE 20200501.fr/ PRE 20200501.frr/ PRE 20200501.it/ PRE 20200501.simple/ ```
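Until this is fixed, a sketch of the workaround mentioned in the comments: pass an explicit `date` that still exists on the dumps server. The language/date below are examples (check https://dumps.wikimedia.org/kowiki/ for live dates), and non-preprocessed dumps additionally require `apache_beam` and `mwparserfromhell`:

```python
from datasets import load_dataset

wiki = load_dataset(
    "wikipedia",
    language="ko",
    date="20210401",
    beam_runner="DirectRunner",  # processes the raw dump locally
)
```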
https://api.github.com/repos/huggingface/datasets/issues/2170/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2169
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2169/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2169/comments
https://api.github.com/repos/huggingface/datasets/issues/2169/events
https://github.com/huggingface/datasets/pull/2169
850,456,180
MDExOlB1bGxSZXF1ZXN0NjA5MDI2ODUz
2,169
Updated WER metric implementation to avoid memory issues
{ "login": "diego-fustes", "id": 5707233, "node_id": "MDQ6VXNlcjU3MDcyMzM=", "avatar_url": "https://avatars.githubusercontent.com/u/5707233?v=4", "gravatar_id": "", "url": "https://api.github.com/users/diego-fustes", "html_url": "https://github.com/diego-fustes", "followers_url": "https://api.github.com/users/diego-fustes/followers", "following_url": "https://api.github.com/users/diego-fustes/following{/other_user}", "gists_url": "https://api.github.com/users/diego-fustes/gists{/gist_id}", "starred_url": "https://api.github.com/users/diego-fustes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/diego-fustes/subscriptions", "organizations_url": "https://api.github.com/users/diego-fustes/orgs", "repos_url": "https://api.github.com/users/diego-fustes/repos", "events_url": "https://api.github.com/users/diego-fustes/events{/privacy}", "received_events_url": "https://api.github.com/users/diego-fustes/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for suggesting this fix \r\nUnfortunately it looks like it's already been fixed by #2111 \r\n\r\nFeel free to share your thoughts about this PR !\r\n\r\nI'm closing this one if you don't mind." ]
1,617,637,400,000
1,617,721,378,000
1,617,721,378,000
NONE
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2169", "html_url": "https://github.com/huggingface/datasets/pull/2169", "diff_url": "https://github.com/huggingface/datasets/pull/2169.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2169.patch" }
This is in order to fix this issue: https://github.com/huggingface/datasets/issues/2078
https://api.github.com/repos/huggingface/datasets/issues/2169/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2168
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2168/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2168/comments
https://api.github.com/repos/huggingface/datasets/issues/2168/events
https://github.com/huggingface/datasets/pull/2168
849,957,941
MDExOlB1bGxSZXF1ZXN0NjA4NjA4Nzg5
2,168
Preserve split type when reloading dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for diving into this !\r\n\r\nBefore going further, I just want to make sure if using `eval` is the right solution\r\nPersonally I'm not a big fan of `eval` since it has many security concerns. Also storing string representations of python objects in the json files is not ideal either IMO, so maybe it's pos...
1,617,569,181,000
1,618,829,825,000
1,618,823,335,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2168", "html_url": "https://github.com/huggingface/datasets/pull/2168", "diff_url": "https://github.com/huggingface/datasets/pull/2168.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2168.patch" }
Fixes #2167 Using `eval` is not ideal for security reasons (in web apps I assume), but without it the code would be much more complex IMO. In terms of style, instead of explicitly importing a private member (`_RelativeInstruction`), we can add these imports at the top of the module: ```python from . import arrow_reader # gives us access to ReadInstruction and _RelativeInstruction from . import splits # gives us access to NamedSplit ``` and then define the `eval` globals as follows: ```python {**arrow_reader.__dict__, **splits.__dict__} ```
https://api.github.com/repos/huggingface/datasets/issues/2168/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2167
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2167/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2167/comments
https://api.github.com/repos/huggingface/datasets/issues/2167/events
https://github.com/huggingface/datasets/issues/2167
849,944,891
MDU6SXNzdWU4NDk5NDQ4OTE=
2,167
Split type not preserved when reloading the dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,617,564,594,000
1,618,823,335,000
1,618,823,335,000
CONTRIBUTOR
null
null
A minimal reproducible example: ```python >>> from datasets import load_dataset, Dataset >>> dset = load_dataset("sst", split="train") >>> dset.save_to_disk("sst") >>> type(dset.split) <class 'datasets.splits.NamedSplit'> >>> dset = Dataset.load_from_disk("sst") >>> type(dset.split) # NamedSplit expected <class 'str'> ``` It seems like this bug was introduced in #2025.
https://api.github.com/repos/huggingface/datasets/issues/2167/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2166
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2166/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2166/comments
https://api.github.com/repos/huggingface/datasets/issues/2166/events
https://github.com/huggingface/datasets/issues/2166
849,778,545
MDU6SXNzdWU4NDk3Nzg1NDU=
2,166
Regarding Test Sets for the GEM datasets
{ "login": "vyraun", "id": 17217068, "node_id": "MDQ6VXNlcjE3MjE3MDY4", "avatar_url": "https://avatars.githubusercontent.com/u/17217068?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vyraun", "html_url": "https://github.com/vyraun", "followers_url": "https://api.github.com/users/vyraun/followers", "following_url": "https://api.github.com/users/vyraun/following{/other_user}", "gists_url": "https://api.github.com/users/vyraun/gists{/gist_id}", "starred_url": "https://api.github.com/users/vyraun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vyraun/subscriptions", "organizations_url": "https://api.github.com/users/vyraun/orgs", "repos_url": "https://api.github.com/users/vyraun/repos", "events_url": "https://api.github.com/users/vyraun/events{/privacy}", "received_events_url": "https://api.github.com/users/vyraun/received_events", "type": "User", "site_admin": false }
[ { "id": 2067401494, "node_id": "MDU6TGFiZWwyMDY3NDAxNDk0", "url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion", "name": "Dataset discussion", "color": "72f99f", "default": false, "description": "Discussions on the datasets" } ]
closed
false
null
[]
null
[ "Hi @vyraun ! The test references for CommonGen are not publicly available: you can reach out to the original dataset authors if you would like to ask for them, but we will not be releasing them as part of GEM (March 31st was the release date for the test set inputs, references are incidentally released for some of...
1,617,501,765,000
1,617,696,792,000
1,617,696,792,000
NONE
null
null
@yjernite Hi, are the test sets for the GEM datasets scheduled to be [added soon](https://gem-benchmark.com/shared_task)? e.g. ``` from datasets import load_dataset DATASET_NAME="common_gen" data = load_dataset("gem", DATASET_NAME) ``` The test set doesn't have the target or references. ``` data['test'][0] {'concept_set_id': 0, 'concepts': ['drill', 'field', 'run', 'team'], 'gem_id': 'common_gen-test-0', 'gem_parent_id': 'common_gen-test-0', 'references': [], 'target': ''} ```
https://api.github.com/repos/huggingface/datasets/issues/2166/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2165
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2165/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2165/comments
https://api.github.com/repos/huggingface/datasets/issues/2165/events
https://github.com/huggingface/datasets/issues/2165
849,771,665
MDU6SXNzdWU4NDk3NzE2NjU=
2,165
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset
{ "login": "y-rokutan", "id": 24562381, "node_id": "MDQ6VXNlcjI0NTYyMzgx", "avatar_url": "https://avatars.githubusercontent.com/u/24562381?v=4", "gravatar_id": "", "url": "https://api.github.com/users/y-rokutan", "html_url": "https://github.com/y-rokutan", "followers_url": "https://api.github.com/users/y-rokutan/followers", "following_url": "https://api.github.com/users/y-rokutan/following{/other_user}", "gists_url": "https://api.github.com/users/y-rokutan/gists{/gist_id}", "starred_url": "https://api.github.com/users/y-rokutan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/y-rokutan/subscriptions", "organizations_url": "https://api.github.com/users/y-rokutan/orgs", "repos_url": "https://api.github.com/users/y-rokutan/repos", "events_url": "https://api.github.com/users/y-rokutan/events{/privacy}", "received_events_url": "https://api.github.com/users/y-rokutan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\na HF dataset can be converted to a Torch Dataset with a simple wrapper as follows:\r\n```python\r\nfrom torch.utils.data import Dataset\r\n \r\nclass HFDataset(Dataset):\r\n def __init__(self, dset):\r\n self.dset = dset\r\n\r\n def __getitem__(self, idx):\r\n return self.dset[idx]\r...
1,617,498,108,000
1,629,820,535,000
1,617,807,964,000
NONE
null
null
Hi, I'm trying to pretrain a DeepSpeed model using the HF arxiv dataset like: ``` train_ds = nlp.load_dataset('scientific_papers', 'arxiv') train_ds.set_format( type="torch", columns=["input_ids", "attention_mask", "global_attention_mask", "labels"], ) engine, _, _, _ = deepspeed.initialize( args=args, model=model, model_parameters=[p for p in model.parameters() if p.requires_grad], training_data=train_ds) ``` but deepspeed.initialize accepts a torch.utils.data.Dataset only. How can I convert an HF-style dataset to a torch-style dataset?
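A minimal sketch of the wrapper suggested in the first comment (the class name is illustrative):

```python
from torch.utils.data import Dataset


class HFDatasetWrapper(Dataset):
    """Expose a Hugging Face dataset through the torch.utils.data.Dataset interface."""

    def __init__(self, hf_dset):
        self.hf_dset = hf_dset

    def __getitem__(self, idx):
        return self.hf_dset[idx]

    def __len__(self):
        return len(self.hf_dset)


# usage: pass the wrapped split to deepspeed, e.g.
# deepspeed.initialize(..., training_data=HFDatasetWrapper(train_ds["train"]))
```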
https://api.github.com/repos/huggingface/datasets/issues/2165/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2164
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2164/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2164/comments
https://api.github.com/repos/huggingface/datasets/issues/2164/events
https://github.com/huggingface/datasets/pull/2164
849,739,759
MDExOlB1bGxSZXF1ZXN0NjA4NDQ0MTE3
2,164
Replace assertTrue(isinstance with assertIsInstance in tests
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,617,484,022,000
1,617,720,069,000
1,617,720,068,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2164", "html_url": "https://github.com/huggingface/datasets/pull/2164", "diff_url": "https://github.com/huggingface/datasets/pull/2164.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2164.patch" }
Replaces all the occurrences of the `assertTrue(isinstance(` pattern with `assertIsInstance`.
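As a small illustration of why the dedicated assertion is preferable — it reports the offending value and expected type on failure, instead of a bare `False is not true` — here is a sketch:

```python
import unittest


class ExampleTest(unittest.TestCase):
    def test_type(self):
        value = {"a": 1}
        # on failure: "AssertionError: {'a': 1} is not an instance of <class 'dict'>"
        self.assertIsInstance(value, dict)
        # the old pattern only reports "AssertionError: False is not true"
        self.assertTrue(isinstance(value, dict))
```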
https://api.github.com/repos/huggingface/datasets/issues/2164/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2163
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2163/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2163/comments
https://api.github.com/repos/huggingface/datasets/issues/2163/events
https://github.com/huggingface/datasets/pull/2163
849,669,366
MDExOlB1bGxSZXF1ZXN0NjA4Mzk0NDMz
2,163
Concat only unique fields in DatasetInfo.from_merge
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @mariosasko,\r\nJust came across this PR and I was wondering if we can use\r\n`description = \"\\n\\n\".join(OrderedDict.fromkeys([info.description for info in dataset_infos]))`\r\n\r\nThis will obviate the need for `unique` and is almost as fast as `set`. We could have used `dict` inplace of `OrderedDict` but ...
1,617,460,290,000
1,617,720,000,000
1,617,719,999,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2163", "html_url": "https://github.com/huggingface/datasets/pull/2163", "diff_url": "https://github.com/huggingface/datasets/pull/2163.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2163.patch" }
I thought someone from the community with less experience would be interested in fixing this issue, but that wasn't the case. Fixes #2103
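The order-preserving deduplication discussed in the comments can be sketched as follows (`OrderedDict.fromkeys` keeps the first occurrence of each item in insertion order):

```python
from collections import OrderedDict

descriptions = ["shared description", "extra description", "shared description"]

# deduplicate while preserving order, then concatenate
merged = "\n\n".join(OrderedDict.fromkeys(descriptions))
print(merged)  # "shared description\n\nextra description"
```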
https://api.github.com/repos/huggingface/datasets/issues/2163/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2162
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2162/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2162/comments
https://api.github.com/repos/huggingface/datasets/issues/2162/events
https://github.com/huggingface/datasets/issues/2162
849,129,201
MDU6SXNzdWU4NDkxMjkyMDE=
2,162
visualization for cc100 is broken
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
open
false
null
[]
null
[ "This looks like an issue with the cc100 dataset itself but not sure\r\nDid you try loading cc100 on your machine ?", "Hi\nloading works fine, but the viewer only is broken\nthanks\n\nOn Wed, Apr 7, 2021 at 12:17 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> This looks like an issue with the cc100 dataset itself bu...
1,617,358,273,000
1,617,800,467,000
null
NONE
null
null
Hi, the visualization through the dataset viewer is broken for cc100: https://huggingface.co/datasets/viewer/ Thanks a lot
https://api.github.com/repos/huggingface/datasets/issues/2162/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2161
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2161/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2161/comments
https://api.github.com/repos/huggingface/datasets/issues/2161/events
https://github.com/huggingface/datasets/issues/2161
849,127,041
MDU6SXNzdWU4NDkxMjcwNDE=
2,161
any possibility to download part of large datasets only?
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Not yet but it’s on the short/mid-term roadmap (requested by many indeed).", "oh, great, really awesome feature to have, thank you very much for the great, fabulous work", "We'll work on dataset streaming soon. This should allow you to only load the examples you need ;)", "thanks a lot Quentin, this would be...
1,617,358,006,000
1,625,239,169,000
null
NONE
null
null
Hi, some of the datasets I need, like cc100, are very large, so I wonder if I can download only the first X samples of the shuffled/unshuffled data, without first downloading the whole dataset and then sampling. Thanks
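For reference, once dataset streaming landed in later `datasets` releases, the first X samples can be read without downloading the full corpus. A sketch, assuming a version with streaming support (the `lang` value is an example):

```python
from datasets import load_dataset

# streaming=True yields examples lazily instead of downloading the whole dump
dset = load_dataset("cc100", lang="en", split="train", streaming=True)
first_1000 = list(dset.take(1000))
```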
https://api.github.com/repos/huggingface/datasets/issues/2161/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2160
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2160/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2160/comments
https://api.github.com/repos/huggingface/datasets/issues/2160/events
https://github.com/huggingface/datasets/issues/2160
849,052,921
MDU6SXNzdWU4NDkwNTI5MjE=
2,160
data_args.preprocessing_num_workers almost freezes
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi.\r\nI cannot always reproduce this issue, and on later runs I did not see it so far. Sometimes also I set 8 processes but I see less being showed, is this normal, here only 5 are shown for 8 being set, thanks\r\n\r\n```\r\n#3: 11%|███████████████▊ ...
1,617,350,173,000
1,617,358,472,000
1,617,358,471,000
NONE
null
null
Hi @lhoestq, I am running this code from huggingface transformers: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py To speed up tokenization, since I am running on multiple datasets, I set data_args.preprocessing_num_workers = 4 with the opus100 corpus. Tokenization progresses up to a point, then almost freezes for some time, then resumes; overall this takes more time than the single-worker case. I would appreciate your advice on how to use this option properly to speed things up. Thanks
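For context on the progress-bar behavior discussed in the comments: `map` with `num_proc` shards the table and gives each worker process its own bar, so a bar that stops moving may just mean its shard finished. A toy sketch:

```python
from datasets import Dataset

dset = Dataset.from_dict({"text": ["hello world"] * 1000})

# the 4 worker processes each get a shard and report their own progress bar;
# a bar that disappears usually means its shard is done, not that the job froze
out = dset.map(lambda ex: {"n_chars": len(ex["text"])}, num_proc=4)
```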
https://api.github.com/repos/huggingface/datasets/issues/2160/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2159
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2159/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2159/comments
https://api.github.com/repos/huggingface/datasets/issues/2159/events
https://github.com/huggingface/datasets/issues/2159
848,851,962
MDU6SXNzdWU4NDg4NTE5NjI=
2,159
adding ccnet dataset
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "closing since I think this is cc100, just the name has been changed. thanks " ]
1,617,319,716,000
1,617,357,919,000
1,617,357,919,000
NONE
null
null
## Adding a Dataset - **Name:** ccnet - **Description:** Common Crawl - **Paper:** https://arxiv.org/abs/1911.00359 - **Data:** https://github.com/facebookresearch/cc_net - **Motivation:** this is one of the most comprehensive clean monolingual datasets across a variety of languages. Quite important for cross-lingual research. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). thanks
https://api.github.com/repos/huggingface/datasets/issues/2159/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2158
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2158/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2158/comments
https://api.github.com/repos/huggingface/datasets/issues/2158/events
https://github.com/huggingface/datasets/issues/2158
848,506,746
MDU6SXNzdWU4NDg1MDY3NDY=
2,158
viewer "fake_news_english" error
{ "login": "emanuelevivoli", "id": 9447991, "node_id": "MDQ6VXNlcjk0NDc5OTE=", "avatar_url": "https://avatars.githubusercontent.com/u/9447991?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emanuelevivoli", "html_url": "https://github.com/emanuelevivoli", "followers_url": "https://api.github.com/users/emanuelevivoli/followers", "following_url": "https://api.github.com/users/emanuelevivoli/following{/other_user}", "gists_url": "https://api.github.com/users/emanuelevivoli/gists{/gist_id}", "starred_url": "https://api.github.com/users/emanuelevivoli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emanuelevivoli/subscriptions", "organizations_url": "https://api.github.com/users/emanuelevivoli/orgs", "repos_url": "https://api.github.com/users/emanuelevivoli/repos", "events_url": "https://api.github.com/users/emanuelevivoli/events{/privacy}", "received_events_url": "https://api.github.com/users/emanuelevivoli/received_events", "type": "User", "site_admin": false }
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
open
false
null
[]
null
[ "Thanks for reporting !\r\nThe viewer doesn't have all the dependencies of the datasets. We may add openpyxl to be able to show this dataset properly" ]
1,617,286,400,000
1,617,791,169,000
null
NONE
null
null
When I visit the [Huggingface - viewer](https://huggingface.co/datasets/viewer/) website, under the dataset "fake_news_english" I get this error: > ImportError: To be able to use this dataset, you need to install the following dependencies['openpyxl'] using 'pip install # noqa: requires this pandas optional dependency for reading xlsx files' for instance' as well as the error traceback.
https://api.github.com/repos/huggingface/datasets/issues/2158/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2157
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2157/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2157/comments
https://api.github.com/repos/huggingface/datasets/issues/2157/events
https://github.com/huggingface/datasets/pull/2157
847,205,239
MDExOlB1bGxSZXF1ZXN0NjA2MjM1NjUx
2,157
updated user permissions based on umask
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,617,219,509,000
1,617,693,559,000
1,617,693,559,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2157", "html_url": "https://github.com/huggingface/datasets/pull/2157", "diff_url": "https://github.com/huggingface/datasets/pull/2157.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2157.patch" }
Updated user permissions based on the running user's umask (#2065). Let me know if `0o666` looks good, or whether I should change it to `~umask` only (to give execute permissions as well).
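A sketch of the permission computation under discussion (reading the umask requires setting it, so it is set and immediately restored):

```python
import os

# os.umask sets a new mask and returns the old one, so read it via set-and-restore
umask = os.umask(0)
os.umask(umask)

# 0o666 (rw for user/group/other) filtered through the process umask
mode = 0o666 & ~umask
print(oct(mode))  # e.g. 0o644 under the common 0o022 umask
```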
https://api.github.com/repos/huggingface/datasets/issues/2157/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2156
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2156/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2156/comments
https://api.github.com/repos/huggingface/datasets/issues/2156/events
https://github.com/huggingface/datasets/pull/2156
847,198,295
MDExOlB1bGxSZXF1ZXN0NjA2MjI5MTky
2,156
User permissions
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,617,219,228,000
1,617,219,264,000
1,617,219,264,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2156", "html_url": "https://github.com/huggingface/datasets/pull/2156", "diff_url": "https://github.com/huggingface/datasets/pull/2156.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2156.patch" }
Updated user permissions based on the running user's umask. Let me know if `0o666` looks good, or whether I should change it to `~umask` only (to give execute permissions as well).
https://api.github.com/repos/huggingface/datasets/issues/2156/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2155
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2155/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2155/comments
https://api.github.com/repos/huggingface/datasets/issues/2155/events
https://github.com/huggingface/datasets/pull/2155
846,786,897
MDExOlB1bGxSZXF1ZXN0NjA1ODU3MTU4
2,155
Add table classes to the documentation
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Just note that docstrings injected from PyArrow do not follow the same convention for formatting types in `Args` or `Returns` as we do... Not a big problem, anyway! 😄 " ]
1,617,201,370,000
1,617,295,590,000
1,617,205,328,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2155", "html_url": "https://github.com/huggingface/datasets/pull/2155", "diff_url": "https://github.com/huggingface/datasets/pull/2155.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2155.patch" }
Following #2025, I added the table classes to the documentation. cc @albertvillanova
https://api.github.com/repos/huggingface/datasets/issues/2155/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2154
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2154/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2154/comments
https://api.github.com/repos/huggingface/datasets/issues/2154/events
https://github.com/huggingface/datasets/pull/2154
846,763,960
MDExOlB1bGxSZXF1ZXN0NjA1ODM2Mjc1
2,154
Adding the NorNE dataset for Norwegian POS and NER
{ "login": "versae", "id": 173537, "node_id": "MDQ6VXNlcjE3MzUzNw==", "avatar_url": "https://avatars.githubusercontent.com/u/173537?v=4", "gravatar_id": "", "url": "https://api.github.com/users/versae", "html_url": "https://github.com/versae", "followers_url": "https://api.github.com/users/versae/followers", "following_url": "https://api.github.com/users/versae/following{/other_user}", "gists_url": "https://api.github.com/users/versae/gists{/gist_id}", "starred_url": "https://api.github.com/users/versae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/versae/subscriptions", "organizations_url": "https://api.github.com/users/versae/orgs", "repos_url": "https://api.github.com/users/versae/repos", "events_url": "https://api.github.com/users/versae/events{/privacy}", "received_events_url": "https://api.github.com/users/versae/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Awesome!" ]
1,617,200,570,000
1,617,269,220,000
1,617,268,568,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2154", "html_url": "https://github.com/huggingface/datasets/pull/2154", "diff_url": "https://github.com/huggingface/datasets/pull/2154.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2154.patch" }
NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types, including persons, organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names. See #1720.
https://api.github.com/repos/huggingface/datasets/issues/2154/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2153
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2153/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2153/comments
https://api.github.com/repos/huggingface/datasets/issues/2153/events
https://github.com/huggingface/datasets/issues/2153
846,181,502
MDU6SXNzdWU4NDYxODE1MDI=
2,153
load_dataset ignoring features
{ "login": "GuillemGSubies", "id": 37592763, "node_id": "MDQ6VXNlcjM3NTkyNzYz", "avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GuillemGSubies", "html_url": "https://github.com/GuillemGSubies", "followers_url": "https://api.github.com/users/GuillemGSubies/followers", "following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}", "gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}", "starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions", "organizations_url": "https://api.github.com/users/GuillemGSubies/orgs", "repos_url": "https://api.github.com/users/GuillemGSubies/repos", "events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}", "received_events_url": "https://api.github.com/users/GuillemGSubies/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "Hi ! Thanks for reporting. I opened a PR to fix this issue: #2201", "Nice question which helped me a lot! I have wasted a lot of time to the `DatasetDict` creation from a csv file. Hope the document of this module add some simple examples.", "Hi :) We're indeed working on tutorials that we will add to the docs...
1,617,179,409,000
1,630,077,838,000
null
NONE
null
null
First of all, I'm sorry if this is a repeated issue or the changes are already in master; I searched and didn't find anything. I'm using datasets 1.5.0 ![image](https://user-images.githubusercontent.com/37592763/113114369-8f376580-920b-11eb-900d-94365b59f04b.png) As you can see, when I load the dataset, the ClassLabels are ignored; I have to cast the dataset in order to make it work. Code to reproduce: ```python import datasets data_location = "/data/prueba_multiclase" features = datasets.Features( {"texto": datasets.Value("string"), "label": datasets.features.ClassLabel(names=["false", "true"])} ) dataset = datasets.load_dataset( "csv", data_files=data_location, delimiter="\t", features=features ) ``` Dataset I used: [prueba_multiclase.zip](https://github.com/huggingface/datasets/files/6235022/prueba_multiclase.zip) (it has to be unzipped) Thank you! ❤️
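Until the underlying bug is fixed, the cast the reporter mentions can be applied explicitly after loading. A minimal sketch of that workaround, assuming the same file layout as above (on datasets 1.5 the cast method may be the in-place `cast_` instead):

```python
import datasets

features = datasets.Features(
    {
        "texto": datasets.Value("string"),
        "label": datasets.features.ClassLabel(names=["false", "true"]),
    }
)
dataset = datasets.load_dataset(
    "csv", data_files="/data/prueba_multiclase", delimiter="\t"
)
# Workaround: apply the intended schema explicitly after loading,
# converting the raw label column into a proper ClassLabel.
dataset = dataset.cast(features)
```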
https://api.github.com/repos/huggingface/datasets/issues/2153/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2152
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2152/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2152/comments
https://api.github.com/repos/huggingface/datasets/issues/2152/events
https://github.com/huggingface/datasets/pull/2152
845,751,273
MDExOlB1bGxSZXF1ZXN0NjA0ODk0MDkz
2,152
Update README.md
{ "login": "JieyuZhao", "id": 22306304, "node_id": "MDQ6VXNlcjIyMzA2MzA0", "avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JieyuZhao", "html_url": "https://github.com/JieyuZhao", "followers_url": "https://api.github.com/users/JieyuZhao/followers", "following_url": "https://api.github.com/users/JieyuZhao/following{/other_user}", "gists_url": "https://api.github.com/users/JieyuZhao/gists{/gist_id}", "starred_url": "https://api.github.com/users/JieyuZhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JieyuZhao/subscriptions", "organizations_url": "https://api.github.com/users/JieyuZhao/orgs", "repos_url": "https://api.github.com/users/JieyuZhao/repos", "events_url": "https://api.github.com/users/JieyuZhao/events{/privacy}", "received_events_url": "https://api.github.com/users/JieyuZhao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,617,160,879,000
1,617,272,437,000
1,617,272,436,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2152", "html_url": "https://github.com/huggingface/datasets/pull/2152", "diff_url": "https://github.com/huggingface/datasets/pull/2152.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2152.patch" }
Updated some descriptions of the Wino_Bias dataset.
https://api.github.com/repos/huggingface/datasets/issues/2152/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2151
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2151/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2151/comments
https://api.github.com/repos/huggingface/datasets/issues/2151/events
https://github.com/huggingface/datasets/pull/2151
844,886,081
MDExOlB1bGxSZXF1ZXN0NjA0MDg5MDMw
2,151
Add support for axis in concatenate datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/1", "html_url": "https://github.com/huggingface/datasets/milestone/1", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels", "id": 6644198, "node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==", "number": 1, "title": "1.6", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 4, "state": "closed", "created_at": 1617973671000, "updated_at": 1618937446000, "due_on": 1618556400000, "closed_at": 1618937446000 }
[ "@lhoestq I am going to implement the consolidation step you mentioned in #1870.", "@lhoestq I was thinking that the order of the TableBlocks is not relevant, isn't it?\r\n\r\nI mean, in order to consolidate _consecutive_ in-memory table blocks, in this case:\r\n```\r\nblocks = [in_memory_1, memory_mapped, in_mem...
1,617,123,524,000
1,624,470,062,000
1,618,848,438,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2151", "html_url": "https://github.com/huggingface/datasets/pull/2151", "diff_url": "https://github.com/huggingface/datasets/pull/2151.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2151.patch" }
Add support for `axis` (0 or 1) in `concatenate_datasets`. Close #853.
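A minimal usage sketch of the new `axis` argument with toy data (column names are illustrative):

```python
from datasets import Dataset, concatenate_datasets

ds1 = Dataset.from_dict({"a": [1, 2, 3]})
ds2 = Dataset.from_dict({"a": [4, 5]})
ds3 = Dataset.from_dict({"b": ["x", "y", "z"]})

rows = concatenate_datasets([ds1, ds2], axis=0)  # stack rows: 5 examples, column "a"
cols = concatenate_datasets([ds1, ds3], axis=1)  # join columns: 3 examples, columns "a" and "b"
```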
https://api.github.com/repos/huggingface/datasets/issues/2151/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2150
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2150/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2150/comments
https://api.github.com/repos/huggingface/datasets/issues/2150/events
https://github.com/huggingface/datasets/pull/2150
844,776,448
MDExOlB1bGxSZXF1ZXN0NjAzOTg3OTcx
2,150
Allow pickling of big in-memory tables
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,617,119,516,000
1,617,187,035,000
1,617,187,034,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2150", "html_url": "https://github.com/huggingface/datasets/pull/2150", "diff_url": "https://github.com/huggingface/datasets/pull/2150.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2150.patch" }
This should fix issue #2134. Pickling is limited to objects smaller than 4 GiB, so it's not possible to pickle a big Arrow table (for multiprocessing, for example). For big tables, we have to write them to disk and only pickle the path to the table.
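A hypothetical sketch of the write-to-disk-and-pickle-the-path idea described above. This is not the code of the PR; the `DiskBackedTable` class and the Feather round-trip are illustrative assumptions:

```python
import os
import pickle
import tempfile

import pyarrow as pa
import pyarrow.feather as feather


class DiskBackedTable:
    """Spill the Arrow table to disk once and pickle only the file path,
    so the pickled payload stays tiny regardless of the table size."""

    def __init__(self, table: pa.Table, path: str = None):
        self.table = table
        self.path = path

    def __reduce__(self):
        if self.path is None:
            fd, self.path = tempfile.mkstemp(suffix=".arrow")
            os.close(fd)
            feather.write_feather(self.table, self.path)
        # Unpickling reads the file back instead of copying the raw bytes.
        return (DiskBackedTable._from_path, (self.path,))

    @staticmethod
    def _from_path(path: str) -> "DiskBackedTable":
        return DiskBackedTable(feather.read_table(path), path)


table = pa.table({"x": list(range(10))})
restored = pickle.loads(pickle.dumps(DiskBackedTable(table)))
assert restored.table.equals(table)
```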
https://api.github.com/repos/huggingface/datasets/issues/2150/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2149
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2149/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2149/comments
https://api.github.com/repos/huggingface/datasets/issues/2149/events
https://github.com/huggingface/datasets/issues/2149
844,734,076
MDU6SXNzdWU4NDQ3MzQwNzY=
2,149
Telugu subset missing for xtreme tatoeba dataset
{ "login": "jerryIsHere", "id": 50871412, "node_id": "MDQ6VXNlcjUwODcxNDEy", "avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jerryIsHere", "html_url": "https://github.com/jerryIsHere", "followers_url": "https://api.github.com/users/jerryIsHere/followers", "following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}", "gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}", "starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions", "organizations_url": "https://api.github.com/users/jerryIsHere/orgs", "repos_url": "https://api.github.com/users/jerryIsHere/repos", "events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}", "received_events_url": "https://api.github.com/users/jerryIsHere/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Good catch ! Thanks for reporting\r\n\r\nI just opened #2180 to fix this" ]
1,617,117,994,000
1,617,791,015,000
null
CONTRIBUTOR
null
null
from nlp import load_dataset train_dataset = load_dataset('xtreme', 'tatoeba.tel')['validation'] ValueError: BuilderConfig tatoeba.tel not found. but language tel is actually included in xtreme: https://github.com/google-research/xtreme/blob/master/utils_preprocess.py def tatoeba_preprocess(args): lang3_dict = { 'afr':'af', 'ara':'ar', 'bul':'bg', 'ben':'bn', 'deu':'de', 'ell':'el', 'spa':'es', 'est':'et', 'eus':'eu', 'pes':'fa', 'fin':'fi', 'fra':'fr', 'heb':'he', 'hin':'hi', 'hun':'hu', 'ind':'id', 'ita':'it', 'jpn':'ja', 'jav':'jv', 'kat':'ka', 'kaz':'kk', 'kor':'ko', 'mal':'ml', 'mar':'mr', 'nld':'nl', 'por':'pt', 'rus':'ru', 'swh':'sw', 'tam':'ta', **_'tel':'te'_**, 'tha':'th', 'tgl':'tl', <----here 'tur':'tr', 'urd':'ur', 'vie':'vi', 'cmn':'zh', 'eng':'en', }
https://api.github.com/repos/huggingface/datasets/issues/2149/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2148
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2148/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2148/comments
https://api.github.com/repos/huggingface/datasets/issues/2148/events
https://github.com/huggingface/datasets/issues/2148
844,700,910
MDU6SXNzdWU4NDQ3MDA5MTA=
2,148
Add configurable options to `seqeval` metric
{ "login": "marrodion", "id": 44571847, "node_id": "MDQ6VXNlcjQ0NTcxODQ3", "avatar_url": "https://avatars.githubusercontent.com/u/44571847?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marrodion", "html_url": "https://github.com/marrodion", "followers_url": "https://api.github.com/users/marrodion/followers", "following_url": "https://api.github.com/users/marrodion/following{/other_user}", "gists_url": "https://api.github.com/users/marrodion/gists{/gist_id}", "starred_url": "https://api.github.com/users/marrodion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marrodion/subscriptions", "organizations_url": "https://api.github.com/users/marrodion/orgs", "repos_url": "https://api.github.com/users/marrodion/repos", "events_url": "https://api.github.com/users/marrodion/events{/privacy}", "received_events_url": "https://api.github.com/users/marrodion/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @marrodion. \r\n\r\nThanks for pointing this out. It would be great to incorporate this metric-specific enhancement.\r\n\r\nAnother possibility would be to require the user to input the scheme as a string `mode=\"strict\", scheme=\"IOB2\"` and then dynamically import the corresponding module using Python `impor...
1,617,116,646,000
1,618,494,586,000
1,618,494,586,000
CONTRIBUTOR
null
null
Right now `load_metric("seqeval")` only works in the default mode of evaluation (equivalent to CoNLL evaluation). However, the seqeval library [supports](https://github.com/chakki-works/seqeval#support-features) different evaluation schemes (IOB1, IOB2, etc.), which can be plugged in just by supporting additional kwargs in `Seqeval._compute` https://github.com/huggingface/datasets/blob/85cf7ff920c90ca2e12bedca12b36d2a043c3da2/metrics/seqeval/seqeval.py#L109 Things that would be relevant are, for example, supporting `mode="strict", scheme=IOB2` to count only a full entity match as a true positive and omit partial matches. The only problem I see is that the spirit of `metrics` seems to be that the user should not need additional imports. `seqeval` only supports schemes as objects, without any string aliases. This could be solved naively with a mapping like `{"IOB2": seqeval.scheme.IOB2}`, or left as is, requiring the user to explicitly import the scheme from `seqeval` if they want to configure it beyond the default implementation. If that makes sense, I am happy to implement the change.
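A sketch of the string-to-scheme mapping discussed above, using a dynamic import as suggested in the thread. The wrapper function and its parameter names are illustrative, not the metric's actual API:

```python
import importlib

from seqeval.metrics import classification_report


def compute_seqeval(predictions, references, scheme=None, mode=None, suffix=False):
    # Naive string-to-class mapping: "IOB2" -> seqeval.scheme.IOB2, etc.
    if isinstance(scheme, str):
        scheme = getattr(importlib.import_module("seqeval.scheme"), scheme)
    return classification_report(
        y_true=references,
        y_pred=predictions,
        suffix=suffix,
        scheme=scheme,
        mode=mode,
        output_dict=True,
    )


refs = [["B-PER", "I-PER", "O"]]
preds = [["B-PER", "I-PER", "O"]]
report = compute_seqeval(preds, refs, scheme="IOB2", mode="strict")
```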
https://api.github.com/repos/huggingface/datasets/issues/2148/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2147
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2147/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2147/comments
https://api.github.com/repos/huggingface/datasets/issues/2147/events
https://github.com/huggingface/datasets/pull/2147
844,687,831
MDExOlB1bGxSZXF1ZXN0NjAzOTA3NjM4
2,147
Render docstring return type as inline
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[]
1,617,116,143,000
1,617,196,265,000
1,617,196,265,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2147", "html_url": "https://github.com/huggingface/datasets/pull/2147", "diff_url": "https://github.com/huggingface/datasets/pull/2147.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2147.patch" }
This documentation setting will avoid having the return type on a separate line under `Return type`. See e.g. the current docs for `Dataset.to_csv`.
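Assuming the docs are built with Sphinx and `sphinx.ext.napoleon`, the setting in question is presumably the following:

```python
# conf.py (Sphinx) -- with napoleon_use_rtype = False, the return type is
# rendered inline with the "Returns" description instead of on a separate
# "Return type" line.
napoleon_use_rtype = False
```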
https://api.github.com/repos/huggingface/datasets/issues/2147/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2146
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2146/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2146/comments
https://api.github.com/repos/huggingface/datasets/issues/2146/events
https://github.com/huggingface/datasets/issues/2146
844,673,244
MDU6SXNzdWU4NDQ2NzMyNDQ=
2,146
Dataset file size on disk is very large with 3D Array
{ "login": "jblemoine", "id": 22685854, "node_id": "MDQ6VXNlcjIyNjg1ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/22685854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jblemoine", "html_url": "https://github.com/jblemoine", "followers_url": "https://api.github.com/users/jblemoine/followers", "following_url": "https://api.github.com/users/jblemoine/following{/other_user}", "gists_url": "https://api.github.com/users/jblemoine/gists{/gist_id}", "starred_url": "https://api.github.com/users/jblemoine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jblemoine/subscriptions", "organizations_url": "https://api.github.com/users/jblemoine/orgs", "repos_url": "https://api.github.com/users/jblemoine/repos", "events_url": "https://api.github.com/users/jblemoine/events{/privacy}", "received_events_url": "https://api.github.com/users/jblemoine/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! In the arrow file we store all the integers as uint8.\r\nSo your arrow file should weigh around `height x width x n_channels x n_images` bytes.\r\n\r\nWhat feature type do your TFDS dataset have ?\r\n\r\nIf it uses a `tfds.features.Image` type, then what is stored is the encoded data (as png or jpg for exampl...
1,617,115,569,000
1,618,578,422,000
null
NONE
null
null
Hi, I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D arrays with dtype=uint8. The actual size on disk is surprisingly large: it takes 520 MB. Here is some info from `dataset_info.json`. `{ "description": "", "citation": "", "homepage": "", "license": "", "features": { "image": { "shape": [224, 224, 3], "dtype": "uint8", "id": null, "_type": "Array3D", } }, "post_processed": null, "supervised_keys": null, "builder_name": "shot_type_image_dataset", "config_name": "default", "version": { "version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0, }, "splits": { "train": { "name": "train", "num_bytes": 520803408, "num_examples": 1479, "dataset_name": "shot_type_image_dataset", } }, "download_checksums": { "": { "num_bytes": 16940447118, "checksum": "5854035705efe08b0ed8f3cf3da7b4d29cba9055c2d2d702c79785350d72ee03", } }, "download_size": 16940447118, "post_processing_size": null, "dataset_size": 520803408, "size_in_bytes": 17461250526, }` I have created the same dataset with tensorflow_datasets and it takes only 125 MB on disk. I am wondering, is this normal behavior? I understand `Datasets` uses Arrow for serialization whereas TF uses TFRecords. This might be a problem for large datasets. Thanks for your help.
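A back-of-the-envelope check of the expected on-disk size, assuming raw uint8 storage with no compression, using the numbers from the `dataset_info.json` above:

```python
# One byte per uint8 value, no compression assumed.
n_images, height, width, channels = 1479, 224, 224, 3
expected_bytes = n_images * height * width * channels
print(expected_bytes / 1e6)  # ~222.6 MB, so the observed ~520 MB is more than 2x that
```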
https://api.github.com/repos/huggingface/datasets/issues/2146/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2145
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2145/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2145/comments
https://api.github.com/repos/huggingface/datasets/issues/2145/events
https://github.com/huggingface/datasets/pull/2145
844,603,518
MDExOlB1bGxSZXF1ZXN0NjAzODMxOTE2
2,145
Implement Dataset add_column
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/3", "html_url": "https://github.com/huggingface/datasets/milestone/3", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/3/labels", "id": 6644287, "node_id": "MDk6TWlsZXN0b25lNjY0NDI4Nw==", "number": 3, "title": "1.7", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 3, "state": "closed", "created_at": 1617974191000, "updated_at": 1622478053000, "due_on": 1620975600000, "closed_at": 1622478053000 }
[ "#2274 has been merged. You can now merge master into this branch and use `assert_arrow_metadata_are_synced_with_dataset_features(dset)` to make sure that the metadata are good :)" ]
1,617,112,934,000
1,619,707,844,000
1,619,707,843,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2145", "html_url": "https://github.com/huggingface/datasets/pull/2145", "diff_url": "https://github.com/huggingface/datasets/pull/2145.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2145.patch" }
Implement `Dataset.add_column`. Close #1954.
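A minimal usage sketch of the new method with toy data:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["foo", "bar"]})
ds = ds.add_column("label", [0, 1])  # the new column must match len(ds)
print(ds.column_names)  # ['text', 'label']
```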
https://api.github.com/repos/huggingface/datasets/issues/2145/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2144
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2144/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2144/comments
https://api.github.com/repos/huggingface/datasets/issues/2144/events
https://github.com/huggingface/datasets/issues/2144
844,352,067
MDU6SXNzdWU4NDQzNTIwNjc=
2,144
Loading wikipedia 20200501.en throws pyarrow related error
{ "login": "TomPyonsuke", "id": 26637405, "node_id": "MDQ6VXNlcjI2NjM3NDA1", "avatar_url": "https://avatars.githubusercontent.com/u/26637405?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TomPyonsuke", "html_url": "https://github.com/TomPyonsuke", "followers_url": "https://api.github.com/users/TomPyonsuke/followers", "following_url": "https://api.github.com/users/TomPyonsuke/following{/other_user}", "gists_url": "https://api.github.com/users/TomPyonsuke/gists{/gist_id}", "starred_url": "https://api.github.com/users/TomPyonsuke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TomPyonsuke/subscriptions", "organizations_url": "https://api.github.com/users/TomPyonsuke/orgs", "repos_url": "https://api.github.com/users/TomPyonsuke/repos", "events_url": "https://api.github.com/users/TomPyonsuke/events{/privacy}", "received_events_url": "https://api.github.com/users/TomPyonsuke/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "That's how I loaded the dataset\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache')\r\n```", "Hi ! It looks like the arrow file in the folder\r\n`/usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa...
1,617,100,711,000
1,617,268,877,000
null
NONE
null
null
**Problem description** I am getting the following error when trying to load the wikipedia/20200501.en dataset. **Error log** Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931... Downloading: 100%|██████████| 14.6k/14.6k [00:00<00:00, 5.41MB/s] Downloading: 59%|█████▊ | 10.7G/18.3G [11:30<08:08, 15.5MB/s] Dataset wikipedia downloaded and prepared to /usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931. Subsequent calls will reuse this data. Traceback (most recent call last): File "load_wiki.py", line 2, in <module> ds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache') File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 751, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 746, in as_dataset map_tuple=True, File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 204, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 204, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 142, in _single_map_nested return function(data_struct) File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 763, in _build_single_dataset in_memory=in_memory, File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 835, in _as_dataset in_memory=in_memory, File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 215, in read return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 236, in read_files pa_table = self._read_files(files, in_memory=in_memory) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 171, in _read_files pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 302, in _get_dataset_from_filename pa_table = ArrowReader.read_table(filename, in_memory=in_memory) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 324, in read_table pa_table = f.read_all() File "pyarrow/ipc.pxi", line 544, in pyarrow.lib.RecordBatchReader.read_all File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status OSError: Expected to be able to read 9176784 bytes for message body, got 4918712 **Detailed version info** datasets==1.5.0 - dataclasses [required: Any, installed: 0.8] - dill [required: Any, installed: 0.3.3] - fsspec [required: Any, installed: 0.8.7] - importlib-metadata [required: Any, installed: 1.7.0] - zipp [required: >=0.5, installed: 3.1.0] - huggingface-hub [required: <0.1.0, installed: 0.0.7] - filelock [required: Any, installed: 3.0.12] - importlib-metadata [required: Any, installed: 1.7.0] - zipp [required: >=0.5, installed: 3.1.0] - requests [required: Any, installed: 2.24.0] - certifi [required: >=2017.4.17, installed: 2020.6.20] - chardet [required: >=3.0.2,<4, installed: 3.0.4] - idna [required: >=2.5,<3, installed: 2.6] - urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.10] - tqdm [required: Any, installed: 4.49.0] - importlib-metadata [required: Any, installed: 1.7.0] - zipp [required: >=0.5, installed: 3.1.0] - multiprocess [required: Any, installed: 0.70.11.1] - dill [required: >=0.3.3, installed: 0.3.3] - numpy [required: >=1.17, installed: 1.17.0] - pandas [required: Any, installed: 1.1.5] - numpy [required: >=1.15.4, installed: 1.17.0] - python-dateutil [required: >=2.7.3, installed: 2.8.0] - six [required: >=1.5, installed: 1.15.0] - pytz [required: >=2017.2, installed: 2020.1] - pyarrow [required: >=0.17.1, installed: 3.0.0] - numpy [required: >=1.16.6, installed: 1.17.0] - requests [required: >=2.19.0, installed: 2.24.0] - certifi [required: >=2017.4.17, installed: 2020.6.20] - chardet [required: >=3.0.2,<4, installed: 3.0.4] - idna [required: >=2.5,<3, installed: 2.6] - urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.10] - tqdm [required: >=4.27,<4.50.0, installed: 4.49.0] - xxhash [required: Any, installed: 2.0.0]
https://api.github.com/repos/huggingface/datasets/issues/2144/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2143
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2143/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2143/comments
https://api.github.com/repos/huggingface/datasets/issues/2143/events
https://github.com/huggingface/datasets/pull/2143
844,313,228
MDExOlB1bGxSZXF1ZXN0NjAzNTc0NjI0
2,143
task casting via load_dataset
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github...
null
[]
1,617,098,442,000
1,623,417,641,000
1,623,417,636,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2143", "html_url": "https://github.com/huggingface/datasets/pull/2143", "diff_url": "https://github.com/huggingface/datasets/pull/2143.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2143.patch" }
WIP. Not satisfied with the API: it means that, as a dataset implementer, I need to write a boilerplate function and classes for each `<dataset><task>` "facet".
https://api.github.com/repos/huggingface/datasets/issues/2143/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2142
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2142/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2142/comments
https://api.github.com/repos/huggingface/datasets/issues/2142/events
https://github.com/huggingface/datasets/pull/2142
843,919,420
MDExOlB1bGxSZXF1ZXN0NjAzMjQwMzUy
2,142
Gem V1.1
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,617,061,622,000
1,617,063,002,000
1,617,063,002,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2142", "html_url": "https://github.com/huggingface/datasets/pull/2142", "diff_url": "https://github.com/huggingface/datasets/pull/2142.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2142.patch" }
This branch updates the GEM benchmark to its 1.1 version which includes: - challenge sets for most tasks - detokenized TurkCorpus to match the rest of the text simplification subtasks - fixed inputs for TurkCorpus and ASSET test sets - 18 languages in WikiLingua cc @sebastianGehrmann
https://api.github.com/repos/huggingface/datasets/issues/2142/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2141
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2141/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2141/comments
https://api.github.com/repos/huggingface/datasets/issues/2141/events
https://github.com/huggingface/datasets/pull/2141
843,914,790
MDExOlB1bGxSZXF1ZXN0NjAzMjM2MjUw
2,141
added spans field for the wikiann datasets
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq \r\nThanks a lot for taking time checking it. I update \"dataset_infos.json\", I added description to the function of _generate_samples in wikiann.py but I was not sure about the format to write in README. thanks. ", "Thanks !\r\n\r\nFor the fields description in the dataset card, something like thi...
1,617,061,106,000
1,617,197,270,000
1,617,197,270,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2141", "html_url": "https://github.com/huggingface/datasets/pull/2141", "diff_url": "https://github.com/huggingface/datasets/pull/2141.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2141.patch" }
Hi @lhoestq I tried to add spans to the wikiann datasets. Thanks a lot for kindly having a look. This addresses https://github.com/huggingface/datasets/issues/2130. Best regards Rabeeh
https://api.github.com/repos/huggingface/datasets/issues/2141/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2140
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2140/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2140/comments
https://api.github.com/repos/huggingface/datasets/issues/2140/events
https://github.com/huggingface/datasets/pull/2140
843,830,451
MDExOlB1bGxSZXF1ZXN0NjAzMTYxMjYx
2,140
add banking77 dataset
{ "login": "dkajtoch", "id": 32985207, "node_id": "MDQ6VXNlcjMyOTg1MjA3", "avatar_url": "https://avatars.githubusercontent.com/u/32985207?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dkajtoch", "html_url": "https://github.com/dkajtoch", "followers_url": "https://api.github.com/users/dkajtoch/followers", "following_url": "https://api.github.com/users/dkajtoch/following{/other_user}", "gists_url": "https://api.github.com/users/dkajtoch/gists{/gist_id}", "starred_url": "https://api.github.com/users/dkajtoch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dkajtoch/subscriptions", "organizations_url": "https://api.github.com/users/dkajtoch/orgs", "repos_url": "https://api.github.com/users/dkajtoch/repos", "events_url": "https://api.github.com/users/dkajtoch/events{/privacy}", "received_events_url": "https://api.github.com/users/dkajtoch/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq I updated files" ]
1,617,053,543,000
1,617,960,738,000
1,617,960,738,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2140", "html_url": "https://github.com/huggingface/datasets/pull/2140", "diff_url": "https://github.com/huggingface/datasets/pull/2140.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2140.patch" }
Intent classification/detection dataset from the banking domain with 77 unique intents.
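Once merged, loading it should be the usual one-liner; the feature names in the comment below are my assumption about the final schema:

```python
from datasets import load_dataset

banking = load_dataset("banking77")
print(banking["train"].features)  # expected: a "text" string plus a 77-class "label"
```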
https://api.github.com/repos/huggingface/datasets/issues/2140/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2139
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2139/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2139/comments
https://api.github.com/repos/huggingface/datasets/issues/2139/events
https://github.com/huggingface/datasets/issues/2139
843,662,613
MDU6SXNzdWU4NDM2NjI2MTM=
2,139
TypeError when using save_to_disk in a dataset loaded with ReadInstruction split
{ "login": "PedroMLF", "id": 22480495, "node_id": "MDQ6VXNlcjIyNDgwNDk1", "avatar_url": "https://avatars.githubusercontent.com/u/22480495?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PedroMLF", "html_url": "https://github.com/PedroMLF", "followers_url": "https://api.github.com/users/PedroMLF/followers", "following_url": "https://api.github.com/users/PedroMLF/following{/other_user}", "gists_url": "https://api.github.com/users/PedroMLF/gists{/gist_id}", "starred_url": "https://api.github.com/users/PedroMLF/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PedroMLF/subscriptions", "organizations_url": "https://api.github.com/users/PedroMLF/orgs", "repos_url": "https://api.github.com/users/PedroMLF/repos", "events_url": "https://api.github.com/users/PedroMLF/events{/privacy}", "received_events_url": "https://api.github.com/users/PedroMLF/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi !\r\nI think this has been fixed recently on `master`.\r\nCan you try again by installing `datasets` from `master` ?\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```", "Hi!\r\n\r\nUsing that version of the code solves the issue. Thanks!" ]
1,617,042,234,000
1,617,095,573,000
1,617,095,573,000
NONE
null
null
Hi, Loading a dataset with `load_dataset` using a split defined via `ReadInstruction` and then saving it to disk results in the following error: `TypeError: Object of type ReadInstruction is not JSON serializable`. Here is the minimal reproducible example: ```python from datasets import load_dataset from datasets import ReadInstruction data_1 = load_dataset( "wikiann", "en", split="validation", ) data_1.save_to_disk("temporary_path_1") print("Save with regular split works.") data_2 = load_dataset( "wikiann", "en", split=ReadInstruction("validation", to=50, unit="%"), ) data_2.save_to_disk("temporary_path_2") ``` and the corresponding output: ``` Reusing dataset wikiann (/xxxxx/.cache/huggingface/datasets/wikiann/en/1.1.0/0b11a6fb31eea02f38ca17610657bfba3206100685283014daceb8da291c3be9) Save with regular split works. Reusing dataset wikiann (/xxxxx/.cache/huggingface/datasets/wikiann/en/1.1.0/0b11a6fb31eea02f38ca17610657bfba3206100685283014daceb8da291c3be9) Traceback (most recent call last): File "bug.py", line 20, in <module> data_2.save_to_disk("temporary_path_2") File "/xxxxx/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 645, in save_to_disk json.dump(state, state_file, indent=2, sort_keys=True) File "/usr/lib/python3.7/json/__init__.py", line 179, in dump for chunk in iterable: File "/usr/lib/python3.7/json/encoder.py", line 431, in _iterencode yield from _iterencode_dict(o, _current_indent_level) File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict yield from chunks File "/usr/lib/python3.7/json/encoder.py", line 438, in _iterencode o = _default(o) File "/usr/lib/python3.7/json/encoder.py", line 179, in default raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type ReadInstruction is not JSON serializable ``` Let me know if there is some misuse from my end. Thanks in advance.
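Until the fix lands, the same split can be expressed with the string slicing syntax, which keeps the dataset state JSON-serializable. A workaround sketch, not the underlying fix:

```python
from datasets import load_dataset

# "validation[:50%]" is the string form of ReadInstruction("validation", to=50, unit="%");
# a plain string split spec serializes cleanly, so save_to_disk works.
data_2 = load_dataset("wikiann", "en", split="validation[:50%]")
data_2.save_to_disk("temporary_path_2")
```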
https://api.github.com/repos/huggingface/datasets/issues/2139/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/2138
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2138/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2138/comments
https://api.github.com/repos/huggingface/datasets/issues/2138/events
https://github.com/huggingface/datasets/pull/2138
843,508,402
MDExOlB1bGxSZXF1ZXN0NjAyODc4NzU2
2,138
Add CER metric
{ "login": "chutaklee", "id": 6931004, "node_id": "MDQ6VXNlcjY5MzEwMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/6931004?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chutaklee", "html_url": "https://github.com/chutaklee", "followers_url": "https://api.github.com/users/chutaklee/followers", "following_url": "https://api.github.com/users/chutaklee/following{/other_user}", "gists_url": "https://api.github.com/users/chutaklee/gists{/gist_id}", "starred_url": "https://api.github.com/users/chutaklee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chutaklee/subscriptions", "organizations_url": "https://api.github.com/users/chutaklee/orgs", "repos_url": "https://api.github.com/users/chutaklee/repos", "events_url": "https://api.github.com/users/chutaklee/events{/privacy}", "received_events_url": "https://api.github.com/users/chutaklee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,617,033,147,000
1,617,725,771,000
1,617,693,278,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2138", "html_url": "https://github.com/huggingface/datasets/pull/2138", "diff_url": "https://github.com/huggingface/datasets/pull/2138.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2138.patch" }
Add the Character Error Rate (CER) metric, which is used for evaluation in ASR. I have also written unit tests (hopefully thorough enough) but I'm not sure how to integrate them into the existing codebase. ```python import unittest from cer import CER cer = CER() class TestCER(unittest.TestCase): def test_cer_case_sensitive(self): refs = ['White House'] preds = ['white house'] # S = 2, D = 0, I = 0, N = 11, CER = 2 / 11 char_error_rate = cer.compute(predictions=preds, references=refs) self.assertTrue(abs(char_error_rate - 0.1818181818) < 1e-6) def test_cer_whitespace(self): refs = ['were wolf'] preds = ['werewolf'] # S = 0, D = 0, I = 1, N = 9, CER = 1 / 9 char_error_rate = cer.compute(predictions=preds, references=refs) self.assertTrue(abs(char_error_rate - 0.1111111) < 1e-6) refs = ['werewolf'] preds = ['weae wolf'] # S = 1, D = 1, I = 0, N = 8, CER = 0.25 char_error_rate = cer.compute(predictions=preds, references=refs) self.assertTrue(abs(char_error_rate - 0.25) < 1e-6) # consecutive whitespaces case 1 refs = ['were wolf'] preds = ['were wolf'] # S = 0, D = 0, I = 0, N = 9, CER = 0 char_error_rate = cer.compute(predictions=preds, references=refs) self.assertTrue(abs(char_error_rate - 0.0) < 1e-6) # consecutive whitespaces case 2 refs = ['were wolf'] preds = ['were wolf'] # S = 0, D = 0, I = 0, N = 9, CER = 0 char_error_rate = cer.compute(predictions=preds, references=refs) self.assertTrue(abs(char_error_rate - 0.0) < 1e-6) def test_cer_sub(self): refs = ['werewolf'] preds = ['weaewolf'] # S = 1, D = 0, I = 0, N = 8, CER = 0.125 char_error_rate = cer.compute(predictions=preds, references=refs) self.assertTrue(abs(char_error_rate - 0.125) < 1e-6) def test_cer_del(self): refs = ['werewolf'] preds = ['wereawolf'] # S = 0, D = 1, I = 0, N = 8, CER = 0.125 char_error_rate = cer.compute(predictions=preds, references=refs) self.assertTrue(abs(char_error_rate - 0.125) < 1e-6) def test_cer_insert(self): refs = ['werewolf'] preds = ['wereolf'] # S = 0, D = 0, I = 1, N = 8, CER = 0.125 char_error_rate = cer.compute(predictions=preds, references=refs) self.assertTrue(abs(char_error_rate - 0.125) < 1e-6) def test_cer_equal(self): refs = ['werewolf'] char_error_rate = cer.compute(predictions=refs, references=refs) self.assertEqual(char_error_rate, 0.0) def test_cer_list_of_seqs(self): refs = ['werewolf', 'I am your father'] char_error_rate = cer.compute(predictions=refs, references=refs) self.assertEqual(char_error_rate, 0.0) refs = ['werewolf', 'I am your father', 'doge'] preds = ['werxwolf', 'I am your father', 'doge'] # S = 1, D = 0, I = 0, N = 28, CER = 1 / 28 char_error_rate = cer.compute(predictions=preds, references=refs) self.assertTrue(abs(char_error_rate - 0.03571428) < 1e-6) def test_cer_unicode(self): ref = [u'我能吞下玻璃而不伤身体'] pred = [u' 能吞虾玻璃而 不霜身体啦'] # S = 3, D = 2, I = 0, N = 11 # CER = 5 / 11 char_error_rate = cer.compute(predictions=pred, references=ref) self.assertTrue(abs(char_error_rate - 0.4545454545) < 1e-6) ref = [u'我能吞', u'下玻璃而不伤身体'] pred = [u'我 能 吞 下 玻 璃', u'而不伤身体'] # S = 0, D = 5, I = 0, N = 11 # CER = 5 / 11 char_error_rate = cer.compute(predictions=pred, references=ref) self.assertTrue(abs(char_error_rate - 0.454545454545) < 1e-6) ref = [u'我能吞下玻璃而不伤身体'] char_error_rate = cer.compute(predictions=ref, references=ref) self.assertEqual(char_error_rate, 0.0) def test_cer_empty(self): ref = '' pred = 'Hypothesis' with self.assertRaises(ValueError): char_error_rate = cer.compute(predictions=pred, references=ref) if __name__ == '__main__': unittest.main() ```
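Once the metric is in the library, usage should follow the standard metric API; a short sketch based on the first test case above:

```python
from datasets import load_metric

cer = load_metric("cer")  # available once this PR is merged
score = cer.compute(predictions=["white house"], references=["White House"])
print(score)  # 2 substitutions over 11 reference characters ~= 0.1818
```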
https://api.github.com/repos/huggingface/datasets/issues/2138/timeline
null
true
https://api.github.com/repos/huggingface/datasets/issues/2137
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2137/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2137/comments
https://api.github.com/repos/huggingface/datasets/issues/2137/events
https://github.com/huggingface/datasets/pull/2137
843,502,835
MDExOlB1bGxSZXF1ZXN0NjAyODc0MDYw
2,137
Fix missing infos from concurrent dataset loading
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,617,032,772,000
1,617,186,956,000
1,617,186,955,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2137", "html_url": "https://github.com/huggingface/datasets/pull/2137", "diff_url": "https://github.com/huggingface/datasets/pull/2137.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2137.patch" }
This should fix issue #2131. When calling `load_dataset` at the same time from 2 workers, one of the workers could have missing split infos when reloading the dataset from the cache.
https://api.github.com/repos/huggingface/datasets/issues/2137/timeline
null
true