Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown because dataset generation failed with the error below.
```text
Error code:   DatasetGenerationError
Exception:    ArrowInvalid
Message:      Failed to parse string: 'X' as a scalar of type int64
Traceback:    Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 644, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2272, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2223, in cast_table_to_schema
    arrays = [
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2224, in <listcomp>
    cast_array_to_feature(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1795, in wrapper
    return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1795, in <listcomp>
    return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2086, in cast_array_to_feature
    return array_cast(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in wrapper
    return func(array, *args, **kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1949, in array_cast
    return array.cast(pa_type)
  File "pyarrow/array.pxi", line 996, in pyarrow.lib.Array.cast
  File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/compute.py", line 404, in cast
    return call_function("cast", [arr], options, memory_pool)
  File "pyarrow/_compute.pyx", line 590, in pyarrow._compute.call_function
  File "pyarrow/_compute.pyx", line 385, in pyarrow._compute.Function.call
  File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Failed to parse string: 'X' as a scalar of type int64

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1456, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1055, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 894, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 970, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1702, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
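The root cause is readable from the message: the declared schema types `chromosome` as `int64`, but at least one row carries the value `'X'` (the X chromosome), which Arrow cannot parse as an integer. Below is a minimal sketch of the failure and one possible fix; since the dataset's actual loading script is not shown on this page, the `Features` mapping is an assumption that simply mirrors the preview columns:

```python
import pyarrow as pa
from datasets import Features, Value

# Reproduce the cast failure behind the viewer error (minimal example,
# not the worker's actual code): chromosome "X" cannot become int64.
arr = pa.array(["1", "2", "X"])
try:
    arr.cast(pa.int64())
except pa.ArrowInvalid as err:
    print(err)  # Failed to parse string: 'X' as a scalar of type int64

# One assumed fix for a loading script: declare `chromosome` as a string
# feature so chrX/chrY rows survive the cast. Illustration only, not the
# dataset's actual schema.
features = Features({
    "chromosome": Value("string"),
    "position": Value("int64"),
    "ref": Value("string"),
    "alt": Value("string"),
    "label": Value("int64"),
    "consequence": Value("string"),
})
```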


| chromosome (int64) | position (int64) | ref (string) | alt (string) | label (int64) | consequence (string) |
|---|---|---|---|---|---|
| 1 | 13,273 | G | C | 0 | ncRNA |
| 1 | 14,464 | A | T | 0 | ncRNA |
| 1 | 16,688 | G | A | 0 | ncRNA |
| 1 | 17,697 | G | C | 0 | ncRNA |
| 1 | 49,554 | A | G | 0 | Enhancer |
| 1 | 51,479 | T | A | 0 | Promoter |
| 1 | 51,803 | T | C | 0 | Promoter |
| 1 | 51,928 | G | A | 0 | Promoter |
| 1 | 52,058 | G | C | 0 | Promoter |
| 1 | 52,238 | T | G | 0 | Promoter |
| 1 | 54,366 | A | G | 0 | Enhancer |
| 1 | 54,380 | T | C | 0 | Enhancer |
| 1 | 54,421 | A | G | 0 | Enhancer |
| 1 | 54,490 | G | A | 0 | Enhancer |
| 1 | 54,586 | T | C | 0 | Enhancer |
| 1 | 54,676 | C | T | 0 | Enhancer |
| 1 | 54,844 | G | A | 0 | Enhancer |
| 1 | 55,164 | C | A | 0 | Enhancer |
| 1 | 55,545 | C | T | 0 | Enhancer |
| 1 | 55,926 | T | C | 0 | Enhancer |
| 1 | 58,771 | T | C | 0 | ncRNA |
| 1 | 63,268 | T | C | 0 | ncRNA |
| 1 | 63,516 | A | G | 0 | ncRNA |
| 1 | 63,527 | T | C | 0 | ncRNA |
| 1 | 63,671 | G | A | 0 | ncRNA |
| 1 | 63,697 | T | C | 0 | ncRNA |
| 1 | 64,025 | T | C | 0 | ncRNA |
| 1 | 64,764 | C | T | 0 | Promoter |
| 1 | 64,931 | G | A | 0 | Promoter |
| 1 | 73,649 | C | G | 0 | Enhancer |
| 1 | 75,767 | G | A | 0 | Enhancer |
| 1 | 76,057 | A | G | 0 | Enhancer |
| 1 | 76,206 | T | C | 0 | Enhancer |
| 1 | 76,240 | A | G | 0 | Enhancer |
| 1 | 76,838 | T | G | 0 | Enhancer |
| 1 | 76,854 | A | G | 0 | Enhancer |
| 1 | 77,089 | C | T | 0 | Enhancer |
| 1 | 77,866 | C | T | 0 | Enhancer |
| 1 | 77,961 | G | A | 0 | Enhancer |
| 1 | 79,772 | C | G | 0 | Enhancer |
| 1 | 80,323 | G | C | 0 | Enhancer |
| 1 | 80,619 | G | A | 0 | Enhancer |
| 1 | 80,857 | C | T | 0 | Enhancer |
| 1 | 81,260 | C | T | 0 | Enhancer |
| 1 | 81,343 | A | G | 0 | Enhancer |
| 1 | 82,163 | G | A | 0 | Enhancer |
| 1 | 82,400 | G | A | 0 | Enhancer |
| 1 | 82,562 | T | C | 0 | Enhancer |
| 1 | 82,609 | C | G | 0 | Enhancer |
| 1 | 82,676 | T | G | 0 | Enhancer |
| 1 | 82,734 | T | C | 0 | Enhancer |
| 1 | 83,084 | T | A | 0 | Enhancer |
| 1 | 83,443 | C | T | 0 | Enhancer |
| 1 | 83,795 | G | A | 0 | Enhancer |
| 1 | 84,002 | G | A | 0 | Enhancer |
| 1 | 84,244 | A | C | 0 | Enhancer |
| 1 | 84,307 | C | A | 0 | Enhancer |
| 1 | 84,683 | A | G | 0 | Enhancer |
| 1 | 85,529 | G | A | 0 | Enhancer |
| 1 | 85,597 | A | C | 0 | Enhancer |
| 1 | 86,018 | C | G | 0 | Enhancer |
| 1 | 86,065 | G | C | 0 | Enhancer |
| 1 | 86,303 | G | T | 0 | Enhancer |
| 1 | 86,331 | A | G | 0 | Enhancer |
| 1 | 87,190 | G | A | 0 | Enhancer |
| 1 | 88,136 | G | A | 0 | Enhancer |
| 1 | 88,169 | C | T | 0 | Enhancer |
| 1 | 88,172 | G | A | 0 | Enhancer |
| 1 | 88,177 | G | C | 0 | Enhancer |
| 1 | 89,599 | A | T | 0 | ncRNA |
| 1 | 89,946 | A | T | 0 | ncRNA |
| 1 | 90,007 | G | A | 0 | ncRNA |
| 1 | 91,190 | G | A | 0 | ncRNA |
| 1 | 91,421 | T | C | 0 | ncRNA |
| 1 | 91,605 | C | T | 0 | ncRNA |
| 1 | 131,310 | G | C | 0 | ncRNA |
| 1 | 133,160 | G | A | 0 | ncRNA |
| 1 | 133,483 | G | T | 0 | ncRNA |
| 1 | 136,437 | T | C | 0 | Promoter |
| 1 | 136,635 | T | G | 0 | Promoter |
| 1 | 136,817 | T | C | 0 | Promoter |
| 1 | 137,825 | G | A | 0 | ncRNA |
| 1 | 138,593 | G | T | 0 | Promoter |
| 1 | 181,321 | C | G | 0 | Enhancer |
| 1 | 181,583 | C | G | 0 | Enhancer |
| 1 | 181,706 | G | A | 0 | Promoter |
| 1 | 183,800 | C | G | 0 | ncRNA |
| 1 | 185,497 | G | A | 0 | ncRNA |
| 1 | 187,153 | T | C | 0 | ncRNA |
| 1 | 187,259 | G | T | 0 | ncRNA |
| 1 | 188,252 | G | T | 0 | ncRNA |
| 1 | 202,330 | C | A | 0 | Enhancer |
| 1 | 206,074 | T | C | 0 | Enhancer |
| 1 | 206,849 | C | G | 0 | Enhancer |
| 1 | 206,863 | T | C | 0 | Enhancer |
| 1 | 206,930 | G | A | 0 | Enhancer |
| 1 | 263,722 | C | G | 0 | ncRNA |
| 1 | 264,481 | G | A | 0 | ncRNA |
| 1 | 264,557 | A | G | 0 | ncRNA |
| 1 | 264,562 | C | T | 0 | ncRNA |
End of preview.
YAML Metadata Warning: the task_categories value "sequence-modeling" is not in the official list of task categories.
YAML Metadata Warning: the task_ids value "sequence-classification" is not in the official list of task ids.

# vep_clinvar_chr1_split

  • Fields: ref, alt, label, chromosome, position
  • Split: chromosome 1 is the test set; all other chromosomes go to train (see the pandas sketch below)
  • Supports automatic generation of ref/alt sequences
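For reference, the split rule can be reproduced with a short pandas sketch. The input filename `variants.csv` and the string handling of `chromosome` (which also carries values like "X") are assumptions, not part of this repo:

```python
import pandas as pd

# Hypothetical reproduction of the chr1 hold-out split described above.
# "variants.csv" is an assumed combined file with the card's columns.
df = pd.read_csv("variants.csv", dtype={"chromosome": str})

test = df[df["chromosome"] == "1"]    # chromosome 1 -> test
train = df[df["chromosome"] != "1"]   # everything else -> train

test.to_csv("test.csv", index=False)
train.to_csv("train.csv", index=False)
```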

## Usage

```python
from datasets import load_dataset

# sequence_length and fasta_path are custom arguments consumed by this
# dataset's loading script.
ds = load_dataset(
    "Bgoood/vep_mendelian_traits_chr11_split",
    sequence_length=2048,
    fasta_path="/path/to/hg38.fa.gz",
    data_dir=".",
)
```
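Under the hood, the loading script builds a reference and an alternate sequence around each variant. A minimal sketch of the idea using pyfaidx (one of the listed dependencies); the function name `build_pair` and the exact windowing are illustrative, not the script's actual API:

```python
from pyfaidx import Fasta

def build_pair(fasta, chrom, position, ref, alt, sequence_length=2048):
    """Illustrative only: take a window centred on the variant and
    substitute the alt allele at the centre (position is 1-based)."""
    half = sequence_length // 2
    start = position - 1 - half               # pyfaidx slicing is 0-based
    ref_seq = fasta[f"chr{chrom}"][start:start + sequence_length].seq.upper()
    assert ref_seq[half] == ref, "reference allele mismatch"
    alt_seq = ref_seq[:half] + alt + ref_seq[half + 1:]
    return ref_seq, alt_seq

# Same path as fasta_path above; pyfaidx expects bgzip compression for .gz.
genome = Fasta("/path/to/hg38.fa.gz")
```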

---

## 5. Upload to Hugging Face

1. **Initialize the git repo (if you don't have one yet)**

   ```bash
   git lfs install
   git clone https://huggingface.co/datasets/Bgoood/vep_mendelian_traits_chr11_split
   cd vep_mendelian_traits_chr11_split
   # put train.csv, test.csv, vep_mendelian_traits_chr11_split.py, and README.md in this directory
   git add .
   git commit -m "init dataset with script"
   git push
   ```

2. **Or upload directly via the web UI**

   On your dataset page, click "Add file" and upload the files listed above.
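A third option is to push the files programmatically. A hedged sketch using huggingface_hub's `upload_folder`, assuming you have already authenticated with `huggingface-cli login`:

```python
from huggingface_hub import HfApi

api = HfApi()
# Upload every file in the current directory to the dataset repo.
api.upload_folder(
    folder_path=".",
    repo_id="Bgoood/vep_mendelian_traits_chr11_split",
    repo_type="dataset",
)
```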

## 6. How users load the dataset

Users only need to call `load_dataset` like this to have the ref/alt sequences generated automatically:

```python
from datasets import load_dataset

ds = load_dataset(
    "Bgoood/vep_mendelian_traits_chr11_split",
    sequence_length=2048,
    fasta_path="/path/to/hg38.fa.gz",
    data_dir=".",
)
```
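Once loaded, the splits can be inspected as usual. The split names follow the card's chr1 hold-out description; since the names of any generated sequence columns are not documented here, the snippet only touches the raw fields:

```python
# Peek at the first held-out example; column names follow the preview above.
example = ds["test"][0]
print(example["chromosome"], example["position"],
      example["ref"], ">", example["alt"], "label:", example["label"])
```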

## 7. Dependencies

Make sure the user's environment has the following installed:

```bash
pip install datasets pyfaidx pandas
```

## 8. Notes

  • fasta_path must point to a locally accessible hg38.fa.gz file.
  • The dataset uploaded to HF only needs to contain the raw CSV files and the loading script; it does not need to include the FASTA file.

If you need an automated script to generate the CSVs, or have any other customization needs, just let me know!

Downloads last month: 70