---
dataset_info:
  features:
    - name: repo_owner
      dtype: string
    - name: repo_name
      dtype: string
    - name: file_path
      dtype: string
    - name: file_url
      dtype: string
  splits:
    - name: train
      num_bytes: 24941794
      num_examples: 186066
  download_size: 8323253
  dataset_size: 24941794
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
annotations_creators:
  - author
license:
  - gpl-3.0
multilinguality:
  - monolingual
pretty_name: GitHub-Python
dataset_name: github-python
dataset_type: code
tags:
  - code
  - python
size_categories:
  - 100K<n<1M
task_categories:
  - text-generation
---
# GitHub-Python – Licensed & Elaborated Variants
This repository ships two complementary Python-code corpora extracted from public GitHub:
- Licensed Subset – strictly permissive-licensed files suitable for commercial redistribution / model training (the main corpus used in our experiments).
- Elaborated Collection – a broader crawl that additionally contains files under copyleft or unclear licenses (GPL/AGPL/LGPL, etc.). Useful for analysis or pre-training where license mixing is acceptable.
Both variants target code-completion / generation research.
## Dataset at a glance
| | Licensed Subset | Elaborated Collection |
|---|---|---|
| Files (.py) | 53,017 | 186,066 |
| Unique repositories | 16,447 | 59,852 |
| Repository owners | 12,515 | 43,517 |
| Compressed size | 732 MB | 2.4 GB * |
| Vocabulary (tokens) | 443,431 | 443,431 † |
| License coverage | Permissive only | Mixed (permissive + copyleft) |
| Secrets redacted | ✅ | ⚠️ not guaranteed |
| Time window | ≥ 2015-01-01 | ≥ 2015-01-01 |
\* estimated – the Elaborated Collection is distributed as a raw file-URL list, not a single text file.
† the same tokenizer file is shared by both variants.
Numbers were obtained from the final redacted corpus and companion metadata.
## Dataset structure

```
huggingface_dataset/
├── mega_licensed_corpus_redacted.txt      # Licensed Subset – concatenated code
├── python_files.txt                       # Licensed Subset – raw file URLs
├── python_files_elaborated.txt            # Elaborated Collection – raw file URLs
├── python_files_elaborated_metadata.csv   # Elaborated Collection metadata
└── custom_tokens_vocab.txt                # <token>\t<id> vocabulary file
```
### File separator

Individual files are concatenated with the sentinel line:

```
# <FILESEP>
```
Anything following the sentinel until the next sentinel (or EOF) is the source code of one file.
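For convenience, a minimal parsing sketch is shown below; it assumes `mega_licensed_corpus_redacted.txt` has been downloaded locally and simply splits on the sentinel line.

```python
# Minimal sketch: recover individual files from the concatenated Licensed Subset.
# Assumes mega_licensed_corpus_redacted.txt has been downloaded locally.
SENTINEL = "# <FILESEP>"

def iter_files(corpus_path):
    """Yield the source code of each file delimited by sentinel lines."""
    current = []
    with open(corpus_path, encoding="utf-8") as fh:
        for line in fh:
            if line.rstrip("\n") == SENTINEL:
                if current:
                    yield "".join(current)
                current = []
            else:
                current.append(line)
    if current:
        yield "".join(current)

# Preview the first recovered file
first = next(iter_files("mega_licensed_corpus_redacted.txt"))
print(first[:300])
```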
## Dataset variants

1. Licensed Subset (`mega_licensed_corpus_redacted.txt`)
   - 53 K permissively-licensed files (MIT/BSD/Apache/ISC/Unlicense).
   - All API keys & credentials removed.
   - Ready for redistribution & commercial use (respect upstream NOTICE files).
2. Elaborated Collection (`python_files_elaborated.txt`)
   - 186 K files from a much larger crawl.
   - Contains GPL / LGPL / AGPL and other copyleft licenses.
   - Shipped as a URL list + metadata CSV; you must download the files yourself (`datasets.load_dataset` streaming, wget, etc.) – a minimal download sketch follows below.
   - No license filtering or secret redaction performed – use with caution.
When first loading the dataset, decide which variant aligns with your use case (e.g. proprietary model training → Licensed Subset only).
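As a starting point, here is a hedged download sketch. It assumes `python_files_elaborated.txt` contains one raw file URL per line; the sample size and output directory name are illustrative.

```python
# Illustrative sketch: fetch a sample of the Elaborated Collection from the URL list.
# Assumes python_files_elaborated.txt holds one raw GitHub URL per line;
# the output directory and sample size are arbitrary choices.
import os
import requests

os.makedirs("elaborated_files", exist_ok=True)

with open("python_files_elaborated.txt", encoding="utf-8") as fh:
    urls = [line.strip() for line in fh if line.strip()]

for i, url in enumerate(urls[:100]):  # download a small sample first
    resp = requests.get(url, timeout=30)
    if resp.ok:
        with open(os.path.join("elaborated_files", f"{i:06d}.py"), "w", encoding="utf-8") as out:
            out.write(resp.text)
```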
## Collection methodology

### Repository discovery

- Queried the GitHub REST API for projects with ≥ 10 stars (earlier iterations used 100+, later expanded for coverage).
- Only repositories with primary language Python and a last commit ≥ 2015 were kept.
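The exact crawler is not part of this repository; the sketch below only illustrates the kind of GitHub search query described above (the query string and pagination are assumptions).

```python
# Illustrative GitHub search query (not the original crawler).
# An auth token raises rate limits; the query string mirrors the criteria above.
import requests

query = "language:python stars:>=10 pushed:>=2015-01-01"
resp = requests.get(
    "https://api.github.com/search/repositories",
    params={"q": query, "sort": "stars", "per_page": 100},
    headers={"Accept": "application/vnd.github+json"},
    timeout=30,
)
for repo in resp.json().get("items", []):
    print(repo["full_name"], repo["stargazers_count"])
```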
### File filtering

- Retain files whose size ∈ [1 KB, 100 KB].
- Exclude common build/packaging scripts (`setup.py`, `__init__.py`, etc.).
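A filter implementing these rules might look like the following sketch (the exclusion list is an example, not the full set used).

```python
# Illustrative file filter mirroring the stated rules.
EXCLUDED_NAMES = {"setup.py", "__init__.py"}  # example exclusions, not exhaustive

def keep_file(path: str, size_bytes: int) -> bool:
    name = path.rsplit("/", 1)[-1]
    if not name.endswith(".py") or name in EXCLUDED_NAMES:
        return False
    return 1_024 <= size_bytes <= 100 * 1_024  # 1 KB to 100 KB
```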
### License compliance
- Allowed: MIT, Apache-2.0, BSD-2/3-Clause, ISC, Unlicense.
- GPL, LGPL, AGPL and proprietary licenses were excluded.
### Deduplication

- Files were deduplicated by their SHA content hashes; duplicates were skipped.
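The exact hash function is not specified in the metadata; the sketch below uses SHA-256 purely for illustration.

```python
# Illustrative content-hash deduplication (SHA-256 chosen arbitrarily here).
import hashlib

seen: set[str] = set()

def is_duplicate(source_code: str) -> bool:
    digest = hashlib.sha256(source_code.encode("utf-8")).hexdigest()
    if digest in seen:
        return True
    seen.add(digest)
    return False
```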
### Formatting & cleaning

- Formatted with autopep8 to normalise whitespace.
- A custom script removed trailing whitespace and normalised newlines.
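A cleaning pass along these lines could be sketched as follows (assuming the autopep8 package; the exact options used are not documented).

```python
# Illustrative cleaning pass: autopep8 plus trailing-whitespace/newline normalisation.
import autopep8

def clean(source: str) -> str:
    formatted = autopep8.fix_code(source)          # PEP 8 whitespace normalisation
    lines = [line.rstrip() for line in formatted.splitlines()]
    return "\n".join(lines) + "\n"                 # single trailing newline
```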
### Secret redaction

- A truffleHog pass plus custom regexes removed > 150 active credentials.
- The redacted corpus is stored as `mega_licensed_corpus_redacted.txt`.
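The actual pass combined truffleHog with custom regexes; the patterns below are illustrative only and far from exhaustive.

```python
# Illustrative redaction pass (not the original truffleHog + regex pipeline).
import re

PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                         # AWS access key IDs
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]+['\"]"),  # hard-coded API keys
]

def redact(source: str) -> str:
    for pattern in PATTERNS:
        source = pattern.sub("<REDACTED>", source)
    return source
```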
## Custom tokenisation

The accompanying `custom_tokens_vocab.txt` implements a Python-aware sub-token scheme:
- Strip doc-strings & comments.
- Split on:
  - Camel-Case boundaries (CamelCase → Camel, Case)
  - Underscores and spaces
  - Indentation & newlines (preserved as a `<newline>` token)
- Rare tokens (frequency < 10) were dropped → 443 k vocabulary.
Example:
```python
def helloWorld(value):
    return value + 1
```

tokenises to:

```
def hello world ( value ) <newline> return value + 1 <newline>
```
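The original tokenisation script is not included here; the sketch below only approximates the splitting rules above (doc-string/comment stripping and some punctuation handling are omitted, so its output differs slightly from the example).

```python
# Rough approximation of the splitting rules (not the original script).
import re

def tokenise(source: str) -> list[str]:
    tokens = []
    for line in source.splitlines():
        line = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", line)   # CamelCase -> Camel Case
        line = line.replace("_", " ")                          # split underscores
        line = re.sub(r"([()\[\]{}:,+\-*/=])", r" \1 ", line)  # isolate punctuation
        tokens.extend(line.lower().split())
        tokens.append("<newline>")                             # newline preserved as a token
    return tokens

print(" ".join(tokenise("def helloWorld(value):\n    return value + 1")))
```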
## Usage

```python
from datasets import load_dataset

ds = load_dataset("jblitzar/github-python", split="train")
print(ds[0]["code"][:300])  # raw source code
```
If you prefer token-level examples (e.g. to reduce memory usage), map the tokenizer over the dataset:
```python
from tokenizers import Tokenizer

tok = Tokenizer.from_file("custom_tokens_vocab.txt")

def encode(ex):
    ex["input_ids"] = tok.encode(ex["code"]).ids
    return ex

ds = ds.map(encode, remove_columns=["code"])
```
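Alternatively, since `custom_tokens_vocab.txt` is a plain `<token>\t<id>` file (see the structure listing above), it can also be read without the tokenizers library; the `<unk>` fallback below is a hypothetical entry.

```python
# Minimal sketch: read the tab-separated vocabulary file directly.
vocab = {}
with open("custom_tokens_vocab.txt", encoding="utf-8") as fh:
    for line in fh:
        token, idx = line.rstrip("\n").split("\t")
        vocab[token] = int(idx)

unk_id = vocab.get("<unk>", 0)  # hypothetical unknown-token entry
ids = [vocab.get(tok, unk_id) for tok in "def hello world <newline>".split()]
```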
## Ethical considerations & limitations

- Licenses respected – only permissive licenses are included in the Licensed Subset; retain NOTICE files when redistributing derivative works.
- Secrets removed – automated & manual audits were performed, yet users should not assume zero secrets remain; re-audit before public deployments.
- Code quality – projects vary in style & correctness; models trained on this corpus may replicate bugs or vulnerable patterns.
## Citation
If you use this dataset, please cite:
```bibtex
@misc{github-python-2024,
  author       = {JBlitzar},
  title        = {GitHub-Python: A Permissively Licensed Corpus of Python Code},
  year         = {2024},
  howpublished = {\url{https://huggingface.co/datasets/jblitzar/github-python}},
  note         = {Version 1.0}
}
```
## License
Dataset card and aggregation scripts: GPLv3.
Each code snippet remains under its original repository license (MIT,
Apache-2.0, BSD, ISC, etc.). Users must comply with upstream notices when
redistributing code or derivatives.