Update README.md

README.md (changed)
@@ -8,11 +8,12 @@ tags:
 - pretraining
 - web-data
 - fineweb
+- text-generation
 size_categories:
 - 1B<n<10B
 ---

-# FineWeb-6B: First
+# FineWeb-6B: First 6B Tokens

 A curated subset of the [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) dataset containing the first 6 billion tokens, designed for efficient language model pre-training experiments.
@@ -39,7 +40,7 @@ This dataset contains high-quality web text data suitable for pre-training small
 from datasets import load_dataset

 # Load the parquet file
-dataset = load_dataset("
+dataset = load_dataset("weights-and-wires/fineweb-6b")
 ```

 ### Loading Pre-tokenized Data
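As a quick sanity check on the updated loading snippet, a minimal sketch of streaming a few records is shown below; the `train` split name and the `text` column are assumed from the upstream FineWeb schema and are not confirmed by this diff.

```python
from datasets import load_dataset

# Stream the dataset so the full 6B-token subset is not downloaded up front
dataset = load_dataset("weights-and-wires/fineweb-6b", split="train", streaming=True)

# Inspect the first record; FineWeb-style rows keep the raw document in a "text" field (assumed)
sample = next(iter(dataset))
print(sample["text"][:200])
```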
@@ -63,7 +64,7 @@ print(f"Validation tokens: {len(val_data):,}")
 from transformers import PreTrainedTokenizerFast

 tokenizer = PreTrainedTokenizerFast.from_pretrained(
-    "
+    "weights-and-wires/fineweb-6b",
     subfolder="tokenized"
 )
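The tokenizer loaded from the `tokenized` subfolder can be exercised with a simple round trip; this sketch relies only on the standard `transformers` fast-tokenizer interface and assumes nothing about the vocabulary beyond what the README states.

```python
from transformers import PreTrainedTokenizerFast

tokenizer = PreTrainedTokenizerFast.from_pretrained(
    "weights-and-wires/fineweb-6b",
    subfolder="tokenized"
)

# Encode a sample sentence to ids and decode it back to text
ids = tokenizer.encode("FineWeb is a large-scale corpus of filtered web text.")
print(f"{len(ids)} tokens: {ids[:10]} ...")
print(tokenizer.decode(ids))
```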
@@ -128,7 +129,7 @@ The binary files contain:

 ## Training a Model

-This dataset was used to train [
+This dataset was used to train [weights-and-wires/smol-llama](https://huggingface.co/weights-and-wires/smol-llama), a 360M parameter LLaMA-style model. See that repository for training code and details.

 ### Example Training Loop
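Since the surrounding README describes pre-tokenized binary files, a common way to feed them to a training loop is to memory-map the token stream and sample fixed-length windows. The sketch below is illustrative only: the `train.bin` filename and the `uint16` token dtype are assumptions, not details taken from this diff.

```python
import numpy as np
import torch

# Memory-map the pre-tokenized training shard (filename and dtype are assumed here)
tokens = np.memmap("train.bin", dtype=np.uint16, mode="r")

def get_batch(batch_size: int = 8, seq_len: int = 1024):
    """Sample random windows and return (input, target) pairs shifted by one token."""
    starts = np.random.randint(0, len(tokens) - seq_len - 1, size=batch_size)
    x = torch.stack([torch.from_numpy(tokens[s:s + seq_len].astype(np.int64)) for s in starts])
    y = torch.stack([torch.from_numpy(tokens[s + 1:s + 1 + seq_len].astype(np.int64)) for s in starts])
    return x, y

xb, yb = get_batch()
print(xb.shape, yb.shape)  # e.g. torch.Size([8, 1024]) torch.Size([8, 1024])
```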
@@ -165,12 +166,13 @@ The tokenizer is a byte-level BPE (Byte Pair Encoding) tokenizer with:
 If you use this dataset, please cite the original FineWeb dataset:

 ```bibtex
-@
-
-title
-
-
-
+@inproceedings{
+penedo2024the,
+title={The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale},
+author={Guilherme Penedo and Hynek Kydl{\'\i}{\v{c}}ek and Loubna Ben allal and Anton Lozhkov and Margaret Mitchell and Colin Raffel and Leandro Von Werra and Thomas Wolf},
+booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
+year={2024},
+url={https://openreview.net/forum?id=n6SCkn2QaG}
 }
 ```
@@ -181,4 +183,4 @@ This dataset is released under the [ODC-BY](https://opendatacommons.org/licenses
 ## Acknowledgments

 - Original dataset: [HuggingFaceFW/fineweb](https://huggingface.co/datasets/HuggingFaceFW/fineweb)
-- Pre-training project: [
+- Pre-training project: [weights-and-wires/smol-llama](https://huggingface.co/weights-and-wires/smol-llama)