# vep_clinvar_chr1_split

  • Fields: ref, alt, label, chromosome, position
  • Split: chromosome=1 is the test split, all other chromosomes are train (see the sketch after this list)
  • Supports automatic generation of ref/alt sequences
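
For reference, a chromosome-based split like this can be reproduced from a raw variant table with a few lines of pandas. This is a minimal sketch, not the code used to build the dataset; the `variants.csv` filename is an assumption.

```python
import pandas as pd

# Hypothetical input file with the columns listed above: ref, alt, label, chromosome, position
df = pd.read_csv("variants.csv", dtype={"chromosome": str})

test = df[df["chromosome"] == "1"]    # chromosome 1 -> test split
train = df[df["chromosome"] != "1"]   # all other chromosomes -> train split

train.to_csv("train.csv", index=False)
test.to_csv("test.csv", index=False)
```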

## Usage

```python
from datasets import load_dataset

ds = load_dataset(
    "Bgoood/vep_mendelian_traits_chr11_split",
    sequence_length=2048,
    fasta_path="/path/to/hg38.fa.gz",
    data_dir="."
)
```
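
The call returns a standard `DatasetDict`. A quick way to check the splits and the generated columns (any column names beyond those listed above are assumptions about the script's output):

```python
print(ds)                  # split names and sizes
example = ds["test"][0]    # first variant on chromosome 1
print(example.keys())      # ref, alt, label, chromosome, position, plus the generated sequences
```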

---

## 5. Upload to HuggingFace

1. **Initialize the git repo (if you don't have one yet)**
   ```bash
   git lfs install
   git clone https://huggingface.co/datasets/Bgoood/vep_mendelian_traits_chr11_split
   cd vep_mendelian_traits_chr11_split
   # place train.csv, test.csv, vep_mendelian_traits_chr11_split.py and README.md in this directory
   git add .
   git commit -m "init dataset with script"
   git push
   ```
2. **Or upload directly via the web UI**
   On your dataset page, click "Add file" and upload the files listed above.
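
Alternatively, the same files can be pushed programmatically with `huggingface_hub`. A sketch assuming you have already run `huggingface-cli login` and the files are in the current directory:

```python
from huggingface_hub import HfApi

api = HfApi()
# Upload the CSVs, the loading script and the README to the dataset repo
api.upload_folder(
    folder_path=".",
    repo_id="Bgoood/vep_mendelian_traits_chr11_split",
    repo_type="dataset",
    allow_patterns=["train.csv", "test.csv", "*.py", "README.md"],
)
```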

## 6. How users load the dataset

Users only need the following call; the ref/alt sequences are generated automatically:

```python
from datasets import load_dataset

ds = load_dataset(
    "Bgoood/vep_mendelian_traits_chr11_split",
    sequence_length=2048,
    fasta_path="/path/to/hg38.fa.gz",
    data_dir="."
)
```
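
Internally, the sequence generation presumably extracts a `sequence_length` window around `position` from the FASTA and substitutes the alt allele at the variant site. The sketch below illustrates that idea with pyfaidx; it is not the script's actual code, and the function name, 1-based coordinates, and `chr`-prefixed contig names are assumptions.

```python
from pyfaidx import Fasta

def build_ref_alt_window(fasta_path, chromosome, position, ref, alt, sequence_length=2048):
    """Illustrative only: extract a window around a variant and apply the alt allele."""
    fa = Fasta(fasta_path)
    contig = f"chr{chromosome}"                    # assumption: hg38 contigs are named chr1, chr2, ...
    half = sequence_length // 2
    start = max(position - 1 - half, 0)            # 0-based window start (position assumed 1-based)
    end = start + sequence_length                  # 0-based, exclusive window end
    ref_seq = str(fa[contig][start:end]).upper()
    offset = (position - 1) - start                # variant index within the window
    assert ref_seq[offset:offset + len(ref)] == ref.upper(), "reference allele mismatch"
    alt_seq = ref_seq[:offset] + alt.upper() + ref_seq[offset + len(ref):]
    return ref_seq, alt_seq
```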

## 7. Dependencies

Make sure the following packages are installed in the user's environment:

```bash
pip install datasets pyfaidx pandas
```

## 8. Notes

  • fasta_path must be a locally accessible path to hg38.fa.gz (see the quick check after this list).
  • The dataset uploaded to HF only needs to contain the raw CSVs and the loading script; the FASTA file does not need to be uploaded.
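
If loading fails with a FASTA-related error, it can help to confirm the file is readable with pyfaidx first. Note that a compressed reference must be bgzip-compressed (plain gzip cannot be indexed); the path below is a placeholder.

```python
from pyfaidx import Fasta

fa = Fasta("/path/to/hg38.fa.gz")   # must be bgzip-compressed, not plain gzip
print(list(fa.keys())[:5])          # inspect contig names, e.g. "chr1" vs "1"
```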

If you need an automated script to generate the CSVs, or have other customization needs, just let me know!
