# Original Dataset + Tokenized Data + Buggy/Fixed Embedding Pairs + Difference Embeddings
## Overview
This repository contains four related datasets for training a transformation from buggy-code embeddings to fixed-code embeddings.
## Datasets Included
### 1. Original Dataset (`train-00000-of-00001.parquet`)
- **Description**: The legacy RunBugRun dataset
- **Format**: Parquet file with buggy-fixed code pairs, bug labels, and programming language
- **Size**: 456,749 samples
- **Load with**:
```python
from datasets import load_dataset

dataset = load_dataset(
    "ASSERT-KTH/RunBugRun-Final",
    split="train"
)
buggy = dataset['buggy_code']
fixed = dataset['fixed_code']
```
### 2. Difference Embeddings (`diff_embeddings_chunk_XXXX.pkl`)
- **Description**: ModernBERT-large embeddings for each buggy-fixed pair. Each difference embedding is the fixed-code embedding minus the buggy-code embedding, a 1024-dimensional vector.
- **Format**: Pickle file
- **Dimensions**: 456,749 × 1024 in total, split across the chunk files; most chunks hold 20,000 rows, and the final chunk is shorter.
- **Load with**:
```python
from huggingface_hub import hf_hub_download
import pickle

repo_id = "ASSERT-KTH/RunBugRun-Final"
diff_embeddings = []
for chunk_num in range(23):
    file_path = hf_hub_download(
        repo_id=repo_id,
        filename=f"Embeddings_RBR/diff_embeddings/diff_embeddings_chunk_{chunk_num:04d}.pkl",
        repo_type="dataset"
    )
    with open(file_path, 'rb') as f:
        data = pickle.load(f)
    diff_embeddings.extend(data.tolist())
```
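The subtraction relationship above means a fixed-code embedding can be recovered by adding the difference embedding back to the buggy-code embedding. A minimal sanity-check sketch, using synthetic NumPy vectors in place of the real ModernBERT-large embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for real 1024-dimensional embeddings.
buggy_emb = rng.standard_normal(1024)
fixed_emb = rng.standard_normal(1024)

# A difference embedding is defined as fixed minus buggy...
diff_emb = fixed_emb - buggy_emb

# ...so adding it to the buggy embedding recovers the fixed one.
reconstructed = buggy_emb + diff_emb
assert np.allclose(reconstructed, fixed_emb)
```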
### 3. Tokens (`chunk_XXXX.pkl`)
- **Description**: The original dataset tokenized as pairs of buggy and fixed code.
- **Format**: Pickle file
- **Load with**:
```python
from huggingface_hub import hf_hub_download
import pickle

repo_id = "ASSERT-KTH/RunBugRun-Final"
tokenized_data = []
for chunk_num in range(23):
    file_path = hf_hub_download(
        repo_id=repo_id,
        filename=f"Embeddings_RBR/tokenized_data/chunk_{chunk_num:04d}.pkl",
        repo_type="dataset"
    )
    with open(file_path, 'rb') as f:
        data = pickle.load(f)
    tokenized_data.extend(data)
```
### 4. Buggy + Fixed Embeddings (`buggy_fixed_embeddings_chunk_XXXX.pkl`)
- **Description**: Separate ModernBERT-large embeddings for the buggy and the fixed code of each pair
- **Format**: Pickle file
- **Load with**:
```python
from huggingface_hub import hf_hub_download
import pickle

repo_id = "ASSERT-KTH/RunBugRun-Final"
buggy_list = []
fixed_list = []
for chunk_num in range(23):
    file_path = hf_hub_download(
        repo_id=repo_id,
        filename=f"Embeddings_RBR/buggy_fixed_embeddings/buggy_fixed_embeddings_chunk_{chunk_num:04d}.pkl",
        repo_type="dataset"
    )
    with open(file_path, 'rb') as f:
        data = pickle.load(f)
    buggy_list.extend(data['buggy_embeddings'].tolist())
    fixed_list.extend(data['fixed_embeddings'].tolist())
```
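Once loaded, the per-chunk Python lists can be stacked into arrays for training a buggy-to-fixed transformation. A minimal sketch (with small synthetic lists standing in for the real `buggy_list` and `fixed_list`; the real arrays would have shape 456,749 × 1024):

```python
import numpy as np

# Synthetic stand-ins for the lists built by the loading loop above.
buggy_list = [[0.0] * 1024 for _ in range(4)]
fixed_list = [[1.0] * 1024 for _ in range(4)]

X = np.asarray(buggy_list, dtype=np.float32)  # buggy embeddings, shape (N, 1024)
Y = np.asarray(fixed_list, dtype=np.float32)  # fixed embeddings, shape (N, 1024)

# Predicting the fixed embedding from the buggy one is equivalent to
# predicting the difference embedding Y - X (dataset 2 above).
targets = Y - X
```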