davanstrien (HF Staff) committed

Commit c8f983f · 0 Parent(s)

Duplicate from unsloth/jobs

Co-authored-by: Daniel van Strien <[email protected]>

Files changed (6)
  1. .gitattributes +59 -0
  2. README.md +172 -0
  3. continued-pretraining.py +411 -0
  4. sft-gemma3-vlm.py +377 -0
  5. sft-lfm2.5.py +539 -0
  6. sft-qwen3-vl.py +596 -0
.gitattributes ADDED
@@ -0,0 +1,59 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mds filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ # Video files - compressed
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
+ *.webm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,172 @@
+ ---
+ viewer: false
+ tags:
+ - uv-script
+ - unsloth
+ - training
+ - hf-jobs
+ - vlm
+ - fine-tuning
+ ---
+
+ # 🦥 Unsloth Training Scripts for HF Jobs
+
+ UV scripts for fine-tuning LLMs and VLMs using [Unsloth](https://github.com/unslothai/unsloth) on [HF Jobs](https://huggingface.co/docs/hub/jobs) (on-demand cloud GPUs). UV handles dependency installation automatically, so you can run these scripts directly without any local setup.
+
+ These scripts can also be used or adapted by agents to train models for you.
+
+ ## Prerequisites
+
+ - A Hugging Face account
+ - The [HF CLI](https://huggingface.co/docs/huggingface_hub/main/en/guides/cli) installed and authenticated (`hf auth login`)
+ - A dataset on the Hub in the appropriate format (see format requirements below). A strong LLM agent can often convert your data into the right format if needed.
+
+ ## Data Formats
+
+ ### LLM Fine-tuning (SFT)
+
+ Requires conversation data in ShareGPT or similar format:
+
+ ```python
+ {
+     "messages": [
+         {"from": "human", "value": "What is the capital of France?"},
+         {"from": "gpt", "value": "The capital of France is Paris."}
+     ]
+ }
+ ```
+
+ The script auto-converts common formats (ShareGPT, Alpaca, etc.) via `standardize_data_formats`. See [mlabonne/FineTome-100k](https://huggingface.co/datasets/mlabonne/FineTome-100k) for a working dataset example.
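+
+ To sanity-check a dataset before launching a job, a minimal sketch along these lines can help (the dataset and split are illustrative, and importing Unsloth generally expects a CUDA GPU to be present):
+
+ ```python
+ # Sketch: confirm a conversation dataset standardizes into role/content messages,
+ # using the same standardize_data_formats helper the training scripts import.
+ from datasets import load_dataset
+ from unsloth.chat_templates import standardize_data_formats
+
+ dataset = load_dataset("mlabonne/FineTome-100k", split="train[:100]")
+ dataset = standardize_data_formats(dataset)
+ print(dataset[0]["conversations"][0])  # expect {"role": ..., "content": ...}
+ ```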
+
+ ### VLM Fine-tuning
+
+ Requires `images` and `messages` columns:
+
+ ```python
+ {
+     "images": [<PIL.Image>],  # List of images
+     "messages": [
+         {
+             "role": "user",
+             "content": [
+                 {"type": "image"},
+                 {"type": "text", "text": "What's in this image?"}
+             ]
+         },
+         {
+             "role": "assistant",
+             "content": [
+                 {"type": "text", "text": "A golden retriever playing fetch in a park."}
+             ]
+         }
+     ]
+ }
+ ```
+
+ See [davanstrien/iconclass-vlm-sft](https://huggingface.co/datasets/davanstrien/iconclass-vlm-sft) for a working dataset example, and [davanstrien/iconclass-vlm-qwen3-best](https://huggingface.co/davanstrien/iconclass-vlm-qwen3-best) for a model trained with these scripts.
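+
+ If your images and annotations live in local files, a rough sketch like the following can assemble a dataset in the expected shape (file names, captions, and the repo id are placeholders; recent versions of `datasets` infer the image feature from PIL objects, otherwise cast the column explicitly):
+
+ ```python
+ # Sketch: build a tiny VLM SFT dataset with "images" and "messages" columns
+ # and push it to the Hub. All file names and text here are placeholders.
+ from datasets import Dataset
+ from PIL import Image
+
+ rows = [
+     {
+         "images": [Image.open("photos/dog.jpg")],
+         "messages": [
+             {"role": "user", "content": [
+                 {"type": "image"},
+                 {"type": "text", "text": "What's in this image?"},
+             ]},
+             {"role": "assistant", "content": [
+                 {"type": "text", "text": "A golden retriever playing fetch in a park."},
+             ]},
+         ],
+     },
+ ]
+ ds = Dataset.from_list(rows)
+ ds.push_to_hub("your-username/my-vlm-dataset")
+ ```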
+
+ ### Continued Pretraining
+
+ Any dataset with a text column:
+
+ ```python
+ {"text": "Your domain-specific text here..."}
+ ```
+
+ Use `--text-column` if your column has a different name.
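+
+ If your corpus is still sitting in local files, a minimal sketch like this produces a compatible dataset (the directory and repo id are placeholders):
+
+ ```python
+ # Sketch: turn a folder of plain-text files into a one-column "text" dataset
+ # and push it to the Hub for use with continued-pretraining.py.
+ from pathlib import Path
+ from datasets import Dataset
+
+ texts = [p.read_text(encoding="utf-8") for p in Path("corpus").glob("*.txt")]
+ ds = Dataset.from_dict({"text": texts})
+ ds.push_to_hub("your-username/domain-corpus")
+ ```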
+
+ ## Usage
+
+ View available options for any script:
+
+ ```bash
+ uv run https://huggingface.co/datasets/unsloth/jobs/raw/main/sft-lfm2.5.py --help
+ ```
+
+ ### LLM fine-tuning
+
+ Fine-tune [LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct), a compact and efficient text model from Liquid AI:
+
+ ```bash
+ hf jobs uv run \
+     https://huggingface.co/datasets/unsloth/jobs/raw/main/sft-lfm2.5.py \
+     --flavor a10g-small --secrets HF_TOKEN --timeout 4h \
+     -- --dataset mlabonne/FineTome-100k \
+     --num-epochs 1 \
+     --eval-split 0.2 \
+     --output-repo your-username/lfm-finetuned
+ ```
+
+ ### VLM fine-tuning
+
+ ```bash
+ hf jobs uv run \
+     https://huggingface.co/datasets/unsloth/jobs/raw/main/sft-qwen3-vl.py \
+     --flavor a100-large --secrets HF_TOKEN \
+     -- --dataset your-username/dataset \
+     --trackio-space your-username/trackio \
+     --output-repo your-username/my-model
+ ```
+
+ ### Continued pretraining
+
+ ```bash
+ hf jobs uv run \
+     https://huggingface.co/datasets/unsloth/jobs/raw/main/continued-pretraining.py \
+     --flavor a100-large --secrets HF_TOKEN \
+     -- --dataset your-username/domain-corpus \
+     --text-column content \
+     --max-steps 1000 \
+     --output-repo your-username/domain-llm
+ ```
+
+ ### With Trackio monitoring
+
+ ```bash
+ hf jobs uv run \
+     https://huggingface.co/datasets/unsloth/jobs/raw/main/sft-lfm2.5.py \
+     --flavor a10g-small --secrets HF_TOKEN \
+     -- --dataset mlabonne/FineTome-100k \
+     --trackio-space your-username/trackio \
+     --output-repo your-username/lfm-finetuned
+ ```
+
+ ## Scripts
+
+ | Script | Base Model | Task |
+ | ------------------------------------------------------ | -------------------- | ----------------------------- |
+ | [`sft-lfm2.5.py`](sft-lfm2.5.py) | LFM2.5-1.2B-Instruct | LLM fine-tuning (recommended) |
+ | [`sft-qwen3-vl.py`](sft-qwen3-vl.py) | Qwen3-VL-8B | VLM fine-tuning |
+ | [`sft-gemma3-vlm.py`](sft-gemma3-vlm.py) | Gemma 3 4B | VLM fine-tuning (smaller) |
+ | [`continued-pretraining.py`](continued-pretraining.py) | Qwen3-0.6B | Domain adaptation |
+
+ ## Common Options
+
+ | Option | Description | Default |
+ | ------------------------- | -------------------------------------- | ------------ |
+ | `--dataset` | HF dataset ID | _required_ |
+ | `--output-repo` | Where to save trained model | _required_ |
+ | `--max-steps` | Number of training steps | 500 |
+ | `--num-epochs` | Train for N epochs instead of steps | - |
+ | `--eval-split` | Fraction for evaluation (e.g., 0.2) | 0 (disabled) |
+ | `--batch-size` | Per-device batch size | 2 |
+ | `--gradient-accumulation` | Gradient accumulation steps | 4 |
+ | `--lora-r` | LoRA rank | 16 |
+ | `--learning-rate` | Learning rate | 2e-4 |
+ | `--merge-model` | Upload merged model (not just adapter) | false |
+ | `--trackio-space` | HF Space for live monitoring | - |
+ | `--run-name` | Custom name for Trackio run | auto |
+
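+ By default the scripts push a LoRA adapter (pass `--merge-model` to upload a merged model instead). As a rough smoke test of the pushed adapter, something like the following mirrors the inference check in `continued-pretraining.py` (the repo id and prompt are placeholders, and a CUDA GPU is assumed):
+
+ ```python
+ # Sketch: load a pushed adapter with Unsloth and generate a few tokens.
+ from unsloth import FastLanguageModel
+
+ model, tokenizer = FastLanguageModel.from_pretrained(
+     "your-username/lfm-finetuned",  # adapter repo created via --output-repo
+     max_seq_length=2048,
+ )
+ FastLanguageModel.for_inference(model)
+
+ inputs = tokenizer("What is the capital of France?", return_tensors="pt").to("cuda")
+ outputs = model.generate(**inputs, max_new_tokens=64)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+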
+ ## Tips
+
+ - Use `--max-steps 10` to verify everything works before a full run
+ - `--eval-split 0.1` helps detect overfitting
+ - Run `hf jobs hardware` to see GPU pricing (A100-large ~$2.50/hr, L40S ~$1.80/hr)
+ - Add `--streaming` for very large datasets
+ - First training step may take a few minutes (CUDA kernel compilation)
+
+ ## Links
+
+ - [HF Jobs Quickstart](https://huggingface.co/docs/hub/jobs-quickstart)
+ - [Unsloth Documentation](https://docs.unsloth.ai/)
+ - [UV Scripts Guide](https://docs.astral.sh/uv/guides/scripts/)
continued-pretraining.py ADDED
@@ -0,0 +1,411 @@
1
+ # /// script
2
+ # requires-python = ">=3.10"
3
+ # dependencies = [
4
+ # "unsloth",
5
+ # "datasets",
6
+ # "trl",
7
+ # "huggingface_hub[hf_transfer]",
8
+ # "trackio",
9
+ # ]
10
+ # ///
11
+ """
12
+ Continued pretraining of language models using streaming datasets.
13
+
14
+ Demonstrates domain adaptation with streaming - no disk space needed.
15
+ Uses FineWeb-2's Latin subset as default example (1.47M texts, ~1.7GB).
16
+
17
+ Run locally (if you have a GPU):
18
+ uv run continued-pretraining.py --output-repo your-username/qwen-latin
19
+
20
+ Run on HF Jobs:
21
+ hf jobs uv run \
22
+ https://huggingface.co/datasets/unsloth/jobs/raw/main/continued-pretraining.py \
23
+ --flavor a100-large --secrets HF_TOKEN \
24
+ -- --max-steps 1000 --output-repo your-username/qwen-latin
25
+
26
+ With custom dataset:
27
+ uv run continued-pretraining.py \
28
+ --dataset your-username/domain-texts \
29
+ --text-column content \
30
+ --max-steps 1000 \
31
+ --output-repo your-username/domain-llm
32
+ """
33
+
34
+ import argparse
35
+ import logging
36
+ import os
37
+ import sys
38
+ import time
39
+
40
+ # Force unbuffered output for HF Jobs logs
41
+ sys.stdout.reconfigure(line_buffering=True)
42
+ sys.stderr.reconfigure(line_buffering=True)
43
+
44
+ logging.basicConfig(
45
+ level=logging.INFO,
46
+ format="%(asctime)s - %(levelname)s - %(message)s",
47
+ )
48
+ logger = logging.getLogger(__name__)
49
+
50
+
51
+ def check_cuda():
52
+ """Check CUDA availability and exit if not available."""
53
+ import torch
54
+
55
+ if not torch.cuda.is_available():
56
+ logger.error("CUDA is not available. This script requires a GPU.")
57
+ logger.error("Run on a machine with a CUDA-capable GPU or use HF Jobs:")
58
+ logger.error(
59
+ " hf jobs uv run https://huggingface.co/datasets/unsloth/jobs/raw/main/continued-pretraining.py --flavor a100-large ..."
60
+ )
61
+ sys.exit(1)
62
+ logger.info(f"CUDA available: {torch.cuda.get_device_name(0)}")
63
+
64
+
65
+ def parse_args():
66
+ parser = argparse.ArgumentParser(
67
+ description="Continued pretraining of LLMs using streaming datasets",
68
+ formatter_class=argparse.RawDescriptionHelpFormatter,
69
+ epilog="""
70
+ Examples:
71
+ # Train on Latin (default)
72
+ uv run continued-pretraining.py \\
73
+ --max-steps 500 \\
74
+ --output-repo username/qwen-latin
75
+
76
+ # Custom dataset
77
+ uv run continued-pretraining.py \\
78
+ --dataset your-username/domain-texts \\
79
+ --text-column content \\
80
+ --max-steps 1000 \\
81
+ --output-repo username/domain-llm
82
+
83
+ # HF Jobs with monitoring
84
+ hf jobs uv run \\
85
+ https://huggingface.co/datasets/unsloth/jobs/raw/main/continued-pretraining.py \\
86
+ --flavor a100-large --secrets HF_TOKEN \\
87
+ -- --max-steps 1000 --trackio-space username/trackio --output-repo username/qwen-latin
88
+ """,
89
+ )
90
+ parser.add_argument(
91
+ "--base-model",
92
+ default="unsloth/Qwen3-0.6B-Base-unsloth-bnb-4bit",
93
+ help="Base model to fine-tune (default: unsloth/Qwen3-0.6B-Base-unsloth-bnb-4bit)",
94
+ )
95
+ parser.add_argument(
96
+ "--dataset",
97
+ default="HuggingFaceFW/fineweb-2",
98
+ help="Dataset for continued pretraining (default: HuggingFaceFW/fineweb-2)",
99
+ )
100
+ parser.add_argument(
101
+ "--dataset-config",
102
+ default="lat_Latn",
103
+ help="Dataset config/subset name (default: lat_Latn for Latin)",
104
+ )
105
+ parser.add_argument(
106
+ "--text-column",
107
+ default="text",
108
+ help="Column containing text data (default: text)",
109
+ )
110
+ parser.add_argument(
111
+ "--output-repo",
112
+ required=True,
113
+ help="HF Hub repo to push model to (e.g., 'username/qwen-latin')",
114
+ )
115
+ parser.add_argument(
116
+ "--max-steps",
117
+ type=int,
118
+ default=500,
119
+ help="Number of training steps (default: 500)",
120
+ )
121
+ parser.add_argument(
122
+ "--batch-size",
123
+ type=int,
124
+ default=4,
125
+ help="Per-device batch size (default: 4)",
126
+ )
127
+ parser.add_argument(
128
+ "--gradient-accumulation",
129
+ type=int,
130
+ default=4,
131
+ help="Gradient accumulation steps (default: 4)",
132
+ )
133
+ parser.add_argument(
134
+ "--learning-rate",
135
+ type=float,
136
+ default=2e-4,
137
+ help="Learning rate (default: 2e-4)",
138
+ )
139
+ parser.add_argument(
140
+ "--max-seq-length",
141
+ type=int,
142
+ default=2048,
143
+ help="Maximum sequence length (default: 2048)",
144
+ )
145
+ parser.add_argument(
146
+ "--lora-r",
147
+ type=int,
148
+ default=16,
149
+ help="LoRA rank (default: 16)",
150
+ )
151
+ parser.add_argument(
152
+ "--save-local",
153
+ default="pretraining-output",
154
+ help="Local directory to save model (default: pretraining-output)",
155
+ )
156
+ parser.add_argument(
157
+ "--trackio-space",
158
+ default=None,
159
+ help="HF Space for Trackio dashboard (e.g., 'username/trackio')",
160
+ )
161
+ return parser.parse_args()
162
+
163
+
164
+ def main():
165
+ args = parse_args()
166
+
167
+ print("=" * 70)
168
+ print("Continued Pretraining with Streaming Datasets")
169
+ print("=" * 70)
170
+ print(f"\nConfiguration:")
171
+ print(f" Base model: {args.base_model}")
172
+ print(f" Dataset: {args.dataset} ({args.dataset_config})")
173
+ print(f" Text column: {args.text_column}")
174
+ print(f" Max steps: {args.max_steps}")
175
+ print(
176
+ f" Batch size: {args.batch_size} x {args.gradient_accumulation} = {args.batch_size * args.gradient_accumulation}"
177
+ )
178
+ print(f" Learning rate: {args.learning_rate}")
179
+ print(f" LoRA rank: {args.lora_r}")
180
+ print(f" Output repo: {args.output_repo}")
181
+ print(f" Trackio space: {args.trackio_space or '(not configured)'}")
182
+ print()
183
+
184
+ # Check CUDA before heavy imports
185
+ check_cuda()
186
+
187
+ # Enable fast transfers
188
+ os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
189
+
190
+ # Set Trackio space if provided
191
+ if args.trackio_space:
192
+ os.environ["TRACKIO_SPACE_ID"] = args.trackio_space
193
+ logger.info(
194
+ f"Trackio dashboard: https://huggingface.co/spaces/{args.trackio_space}"
195
+ )
196
+
197
+ # Import heavy dependencies
198
+ from unsloth import FastLanguageModel
199
+ from datasets import load_dataset
200
+ from trl import SFTTrainer, SFTConfig
201
+ from huggingface_hub import login
202
+
203
+ # Login to Hub
204
+ token = os.environ.get("HF_TOKEN")
205
+ if token:
206
+ login(token=token)
207
+ logger.info("Logged in to Hugging Face Hub")
208
+ else:
209
+ logger.warning("HF_TOKEN not set - model upload may fail")
210
+
211
+ # 1. Load model
212
+ print("\n[1/5] Loading model...")
213
+ start = time.time()
214
+
215
+ model, tokenizer = FastLanguageModel.from_pretrained(
216
+ args.base_model,
217
+ max_seq_length=args.max_seq_length,
218
+ load_in_4bit=True,
219
+ )
220
+
221
+ model = FastLanguageModel.get_peft_model(
222
+ model,
223
+ r=args.lora_r,
224
+ lora_alpha=args.lora_r * 2,
225
+ lora_dropout=0,
226
+ target_modules=[
227
+ "q_proj",
228
+ "k_proj",
229
+ "v_proj",
230
+ "o_proj",
231
+ "gate_proj",
232
+ "up_proj",
233
+ "down_proj",
234
+ ],
235
+ bias="none",
236
+ use_gradient_checkpointing="unsloth",
237
+ random_state=3407,
238
+ )
239
+ print(f"Model loaded in {time.time() - start:.1f}s")
240
+
241
+ # 2. Load streaming dataset
242
+ print(f"\n[2/5] Loading streaming dataset ({args.dataset})...")
243
+ start = time.time()
244
+
245
+ # Handle dataset with or without config
246
+ if args.dataset_config:
247
+ dataset = load_dataset(
248
+ args.dataset,
249
+ name=args.dataset_config,
250
+ split="train",
251
+ streaming=True,
252
+ )
253
+ else:
254
+ dataset = load_dataset(
255
+ args.dataset,
256
+ split="train",
257
+ streaming=True,
258
+ )
259
+
260
+ # Peek at the data
261
+ sample = next(iter(dataset))
262
+ text_preview = (
263
+ sample[args.text_column][:100]
264
+ if args.text_column in sample
265
+ else "(column not found)"
266
+ )
267
+ print(f"Dataset ready in {time.time() - start:.1f}s")
268
+ print(f" Sample: {text_preview}...")
269
+
270
+ # Reload dataset (consumed one sample above)
271
+ if args.dataset_config:
272
+ dataset = load_dataset(
273
+ args.dataset,
274
+ name=args.dataset_config,
275
+ split="train",
276
+ streaming=True,
277
+ )
278
+ else:
279
+ dataset = load_dataset(
280
+ args.dataset,
281
+ split="train",
282
+ streaming=True,
283
+ )
284
+
285
+ # 3. Format dataset
286
+ print("\n[3/5] Preparing dataset...")
287
+
288
+ text_column = args.text_column
289
+
290
+ def format_text(example):
291
+ return {"text": example[text_column] + tokenizer.eos_token}
292
+
293
+ formatted_dataset = dataset.map(format_text)
294
+
295
+ # 4. Train
296
+ print(f"\n[4/5] Training for {args.max_steps} steps...")
297
+ start = time.time()
298
+
299
+ trainer = SFTTrainer(
300
+ model=model,
301
+ tokenizer=tokenizer,
302
+ train_dataset=formatted_dataset,
303
+ args=SFTConfig(
304
+ per_device_train_batch_size=args.batch_size,
305
+ gradient_accumulation_steps=args.gradient_accumulation,
306
+ warmup_steps=min(10, args.max_steps // 10),
307
+ max_steps=args.max_steps,
308
+ learning_rate=args.learning_rate,
309
+ logging_steps=max(1, args.max_steps // 20),
310
+ optim="adamw_8bit",
311
+ weight_decay=0.01,
312
+ lr_scheduler_type="linear",
313
+ seed=3407,
314
+ output_dir=args.save_local,
315
+ report_to="trackio",
316
+ run_name=f"pretraining-{args.max_steps}steps",
317
+ dataset_text_field="text",
318
+ max_seq_length=args.max_seq_length,
319
+ packing=False,
320
+ ),
321
+ )
322
+
323
+ trainer.train()
324
+ train_time = time.time() - start
325
+
326
+ print(f"\nTraining completed in {train_time / 60:.1f} minutes")
327
+ print(f" Speed: {args.max_steps / train_time:.2f} steps/s")
328
+
329
+ # 5. Save and push
330
+ print("\n[5/5] Saving model...")
331
+
332
+ # Save locally
333
+ model.save_pretrained(args.save_local)
334
+ tokenizer.save_pretrained(args.save_local)
335
+ print(f"Saved locally to {args.save_local}/")
336
+
337
+ # Push to hub
338
+ print(f"\nPushing to {args.output_repo}...")
339
+ model.push_to_hub(args.output_repo, tokenizer=tokenizer)
340
+ print(f"Model available at: https://huggingface.co/{args.output_repo}")
341
+
342
+ # Update model card metadata with dataset info
343
+ from huggingface_hub import metadata_update
344
+
345
+ metadata_update(args.output_repo, {"datasets": [args.dataset]}, overwrite=True)
346
+ print(f" Model card updated with dataset: {args.dataset}")
347
+
348
+ # Quick inference test
349
+ print("\n" + "=" * 70)
350
+ print("Quick inference test:")
351
+ print("=" * 70)
352
+
353
+ FastLanguageModel.for_inference(model)
354
+
355
+ # Use a prompt appropriate to the dataset
356
+ if "lat_Latn" in (args.dataset_config or ""):
357
+ prompt = "Lingua Latina est"
358
+ else:
359
+ prompt = "The quick brown fox"
360
+
361
+ inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
362
+ outputs = model.generate(
363
+ **inputs,
364
+ max_new_tokens=64,
365
+ temperature=0.7,
366
+ do_sample=True,
367
+ )
368
+ generated = tokenizer.decode(outputs[0], skip_special_tokens=True)
369
+
370
+ print(f"\nPrompt: {prompt}")
371
+ print(f"Generated: {generated}")
372
+
373
+ print("\n" + "=" * 70)
374
+ print("Done!")
375
+ print("=" * 70)
376
+
377
+
378
+ if __name__ == "__main__":
379
+ # Show example usage if no arguments
380
+ if len(sys.argv) == 1:
381
+ print("=" * 70)
382
+ print("Continued Pretraining with Streaming Datasets")
383
+ print("=" * 70)
384
+ print("\nContinued pretraining for domain adaptation.")
385
+ print("Streams data directly from the Hub - no disk space needed.")
386
+ print("\nFeatures:")
387
+ print(" - ~60% less VRAM with Unsloth optimizations")
388
+ print(" - 2x faster training vs standard methods")
389
+ print(" - Trackio integration for monitoring")
390
+ print(" - Works with any text dataset")
391
+ print("\nDefault example (Latin):")
392
+ print("\n uv run continued-pretraining.py \\")
393
+ print(" --max-steps 500 \\")
394
+ print(" --output-repo your-username/qwen-latin")
395
+ print("\nHF Jobs example:")
396
+ print("\n hf jobs uv run \\")
397
+ print(
398
+ " https://huggingface.co/datasets/unsloth/jobs/raw/main/continued-pretraining.py \\"
399
+ )
400
+ print(" --flavor a100-large --secrets HF_TOKEN \\")
401
+ print(" -- --max-steps 1000 --output-repo your-username/qwen-latin")
402
+ print("\nCustom dataset:")
403
+ print("\n uv run continued-pretraining.py \\")
404
+ print(" --dataset your-username/domain-texts \\")
405
+ print(" --text-column content \\")
406
+ print(" --output-repo your-username/domain-llm")
407
+ print("\nFor full help: uv run continued-pretraining.py --help")
408
+ print("=" * 70)
409
+ sys.exit(0)
410
+
411
+ main()
sft-gemma3-vlm.py ADDED
@@ -0,0 +1,377 @@
1
+ # /// script
2
+ # requires-python = ">=3.10"
3
+ # dependencies = [
4
+ # "unsloth",
5
+ # "datasets",
6
+ # "trl",
7
+ # "huggingface_hub[hf_transfer]",
8
+ # "trackio",
9
+ # "transformers==4.56.2",
10
+ # "trl==0.22.2",
11
+ # ]
12
+ # ///
13
+ """
14
+ Fine-tune Gemma 3 4B Vision Language Model using Unsloth optimizations.
15
+
16
+ Streams data directly from the Hub - no disk space needed for massive VLM datasets.
17
+ Uses Unsloth for ~60% less VRAM and 2x faster training.
18
+
19
+ Run locally (if you have a GPU):
20
+ uv run sft-gemma3-vlm.py \
21
+ --max-steps 100 \
22
+ --output-repo your-username/vlm-test
23
+
24
+ Run on HF Jobs:
25
+ hf jobs uv run \
26
+ https://huggingface.co/datasets/unsloth/jobs/raw/main/sft-gemma3-vlm.py \
27
+ --flavor a100-large --secrets HF_TOKEN \
28
+ -- --max-steps 500 --output-repo your-username/vlm-finetuned
29
+
30
+ With Trackio dashboard:
31
+ uv run sft-gemma3-vlm.py \
32
+ --max-steps 500 \
33
+ --output-repo your-username/vlm-finetuned \
34
+ --trackio-space your-username/trackio
35
+ """
36
+
37
+ import argparse
38
+ import logging
39
+ import os
40
+ import sys
41
+ import time
42
+
43
+ # Force unbuffered output for HF Jobs logs
44
+ sys.stdout.reconfigure(line_buffering=True)
45
+ sys.stderr.reconfigure(line_buffering=True)
46
+
47
+ logging.basicConfig(
48
+ level=logging.INFO,
49
+ format="%(asctime)s - %(levelname)s - %(message)s",
50
+ )
51
+ logger = logging.getLogger(__name__)
52
+
53
+
54
+ def check_cuda():
55
+ """Check CUDA availability and exit if not available."""
56
+ import torch
57
+
58
+ if not torch.cuda.is_available():
59
+ logger.error("CUDA is not available. This script requires a GPU.")
60
+ logger.error("Run on a machine with a CUDA-capable GPU or use HF Jobs:")
61
+ logger.error(
62
+ " hf jobs uv run https://huggingface.co/datasets/unsloth/jobs/raw/main/sft-gemma3-vlm.py --flavor a100-large ..."
63
+ )
64
+ sys.exit(1)
65
+ logger.info(f"CUDA available: {torch.cuda.get_device_name(0)}")
66
+
67
+
68
+ def parse_args():
69
+ parser = argparse.ArgumentParser(
70
+ description="Fine-tune Gemma 3 4B VLM with streaming datasets using Unsloth",
71
+ formatter_class=argparse.RawDescriptionHelpFormatter,
72
+ epilog="""
73
+ Examples:
74
+ # Quick test run
75
+ uv run sft-gemma3-vlm.py \\
76
+ --max-steps 50 \\
77
+ --output-repo username/vlm-test
78
+
79
+ # Full training with Trackio monitoring
80
+ uv run sft-gemma3-vlm.py \\
81
+ --max-steps 500 \\
82
+ --output-repo username/vlm-finetuned \\
83
+ --trackio-space username/trackio
84
+
85
+ # Custom dataset
86
+ uv run sft-gemma3-vlm.py \\
87
+ --dataset your-username/your-vlm-dataset \\
88
+ --max-steps 1000 \\
89
+ --output-repo username/custom-vlm
90
+ """,
91
+ )
92
+
93
+ # Model and data
94
+ parser.add_argument(
95
+ "--base-model",
96
+ default="unsloth/gemma-3-4b-pt",
97
+ help="Base VLM model (default: unsloth/gemma-3-4b-pt)",
98
+ )
99
+ parser.add_argument(
100
+ "--dataset",
101
+ default="davanstrien/iconclass-vlm-sft",
102
+ help="Dataset with 'images' and 'messages' columns (default: davanstrien/iconclass-vlm-sft)",
103
+ )
104
+ parser.add_argument(
105
+ "--output-repo",
106
+ required=True,
107
+ help="HF Hub repo to push model to (e.g., 'username/vlm-finetuned')",
108
+ )
109
+
110
+ # Training config
111
+ parser.add_argument(
112
+ "--max-steps",
113
+ type=int,
114
+ default=500,
115
+ help="Training steps (default: 500). Required for streaming datasets.",
116
+ )
117
+ parser.add_argument(
118
+ "--batch-size",
119
+ type=int,
120
+ default=2,
121
+ help="Per-device batch size (default: 2)",
122
+ )
123
+ parser.add_argument(
124
+ "--gradient-accumulation",
125
+ type=int,
126
+ default=4,
127
+ help="Gradient accumulation steps (default: 4). Effective batch = batch-size * this",
128
+ )
129
+ parser.add_argument(
130
+ "--learning-rate",
131
+ type=float,
132
+ default=2e-4,
133
+ help="Learning rate (default: 2e-4)",
134
+ )
135
+ parser.add_argument(
136
+ "--max-seq-length",
137
+ type=int,
138
+ default=2048,
139
+ help="Maximum sequence length (default: 2048)",
140
+ )
141
+
142
+ # LoRA config
143
+ parser.add_argument(
144
+ "--lora-r",
145
+ type=int,
146
+ default=16,
147
+ help="LoRA rank (default: 16). Higher = more capacity but more VRAM",
148
+ )
149
+ parser.add_argument(
150
+ "--lora-alpha",
151
+ type=int,
152
+ default=32,
153
+ help="LoRA alpha (default: 32). Usually 2*r",
154
+ )
155
+
156
+ # Logging
157
+ parser.add_argument(
158
+ "--trackio-space",
159
+ default=None,
160
+ help="HF Space for Trackio dashboard (e.g., 'username/trackio')",
161
+ )
162
+ parser.add_argument(
163
+ "--save-local",
164
+ default="vlm-gemma3-output",
165
+ help="Local directory to save model (default: vlm-gemma3-output)",
166
+ )
167
+
168
+ return parser.parse_args()
169
+
170
+
171
+ def main():
172
+ args = parse_args()
173
+
174
+ print("=" * 70)
175
+ print("Gemma 3 4B VLM Streaming Fine-tuning with Unsloth")
176
+ print("=" * 70)
177
+ print("\nConfiguration:")
178
+ print(f" Base model: {args.base_model}")
179
+ print(f" Dataset: {args.dataset}")
180
+ print(f" Max steps: {args.max_steps}")
181
+ print(
182
+ f" Batch size: {args.batch_size} x {args.gradient_accumulation} = {args.batch_size * args.gradient_accumulation}"
183
+ )
184
+ print(f" Learning rate: {args.learning_rate}")
185
+ print(f" LoRA rank: {args.lora_r}")
186
+ print(f" Output repo: {args.output_repo}")
187
+ print(f" Trackio space: {args.trackio_space or '(not configured)'}")
188
+ print()
189
+
190
+ # Check CUDA before heavy imports
191
+ check_cuda()
192
+
193
+ # Enable fast transfers
194
+ os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
195
+
196
+ # Set Trackio space if provided
197
+ if args.trackio_space:
198
+ os.environ["TRACKIO_SPACE_ID"] = args.trackio_space
199
+ logger.info(
200
+ f"Trackio dashboard: https://huggingface.co/spaces/{args.trackio_space}"
201
+ )
202
+
203
+ # Import heavy dependencies (note: import from unsloth.trainer for VLM)
204
+ from unsloth import FastVisionModel, get_chat_template
205
+ from unsloth.trainer import UnslothVisionDataCollator
206
+ from datasets import load_dataset
207
+ from trl import SFTTrainer, SFTConfig
208
+ from huggingface_hub import login
209
+
210
+ # Login to Hub
211
+ token = os.environ.get("HF_TOKEN")
212
+ if token:
213
+ login(token=token)
214
+ logger.info("Logged in to Hugging Face Hub")
215
+ else:
216
+ logger.warning("HF_TOKEN not set - model upload may fail")
217
+
218
+ # 1. Load model
219
+ print("\n[1/5] Loading model...")
220
+ start = time.time()
221
+
222
+ model, processor = FastVisionModel.from_pretrained(
223
+ args.base_model,
224
+ load_in_4bit=True,
225
+ use_gradient_checkpointing="unsloth",
226
+ )
227
+
228
+ model = FastVisionModel.get_peft_model(
229
+ model,
230
+ finetune_vision_layers=True,
231
+ finetune_language_layers=True,
232
+ finetune_attention_modules=True,
233
+ finetune_mlp_modules=True,
234
+ r=args.lora_r,
235
+ lora_alpha=args.lora_alpha,
236
+ lora_dropout=0,
237
+ bias="none",
238
+ random_state=3407,
239
+ use_rslora=False,
240
+ loftq_config=None,
241
+ target_modules="all-linear",
242
+ )
243
+
244
+ # Apply chat template (required for base models)
245
+ processor = get_chat_template(processor, "gemma-3")
246
+ print(f"Model loaded in {time.time() - start:.1f}s")
247
+
248
+ # 2. Load streaming dataset
249
+ print("\n[2/5] Loading streaming dataset...")
250
+ start = time.time()
251
+
252
+ dataset = load_dataset(
253
+ args.dataset,
254
+ split="train",
255
+ streaming=True,
256
+ )
257
+
258
+ # Peek at first sample to show info
259
+ sample = next(iter(dataset))
260
+ print(f"Dataset ready in {time.time() - start:.1f}s")
261
+ if "messages" in sample:
262
+ print(f" Sample has {len(sample['messages'])} messages")
263
+ if "images" in sample:
264
+ img_count = len(sample["images"]) if isinstance(sample["images"], list) else 1
265
+ print(f" Sample has {img_count} image(s)")
266
+
267
+ # Reload dataset (consumed one sample above)
268
+ dataset = load_dataset(
269
+ args.dataset,
270
+ split="train",
271
+ streaming=True,
272
+ )
273
+
274
+ # 3. Configure trainer
275
+ print("\n[3/5] Configuring trainer...")
276
+
277
+ # Enable training mode
278
+ FastVisionModel.for_training(model)
279
+
280
+ training_config = SFTConfig(
281
+ output_dir=args.save_local,
282
+ per_device_train_batch_size=args.batch_size,
283
+ gradient_accumulation_steps=args.gradient_accumulation,
284
+ gradient_checkpointing=True,
285
+ gradient_checkpointing_kwargs={"use_reentrant": False},
286
+ max_grad_norm=0.3,
287
+ warmup_ratio=0.03,
288
+ max_steps=args.max_steps,
289
+ learning_rate=args.learning_rate,
290
+ logging_steps=max(1, args.max_steps // 20),
291
+ save_strategy="steps",
292
+ optim="adamw_torch_fused",
293
+ weight_decay=0.001,
294
+ lr_scheduler_type="cosine",
295
+ seed=3407,
296
+ # VLM-specific settings (required for Unsloth)
297
+ remove_unused_columns=False,
298
+ dataset_text_field="",
299
+ dataset_kwargs={"skip_prepare_dataset": True},
300
+ max_length=args.max_seq_length,
301
+ # Logging
302
+ report_to="trackio",
303
+ run_name=f"gemma3-vlm-{args.max_steps}steps",
304
+ )
305
+
306
+ trainer = SFTTrainer(
307
+ model=model,
308
+ train_dataset=dataset,
309
+ processing_class=processor.tokenizer,
310
+ data_collator=UnslothVisionDataCollator(model, processor),
311
+ args=training_config,
312
+ )
313
+
314
+ # 4. Train
315
+ print(f"\n[4/5] Training for {args.max_steps} steps...")
316
+ start = time.time()
317
+
318
+ trainer.train()
319
+
320
+ train_time = time.time() - start
321
+ print(f"\nTraining completed in {train_time / 60:.1f} minutes")
322
+ print(f" Speed: {args.max_steps / train_time:.2f} steps/s")
323
+
324
+ # 5. Save and push
325
+ print("\n[5/5] Saving model...")
326
+
327
+ # Save locally
328
+ model.save_pretrained(args.save_local)
329
+ processor.save_pretrained(args.save_local)
330
+ print(f"Saved locally to {args.save_local}/")
331
+
332
+ # Push to Hub
333
+ print(f"\nPushing to {args.output_repo}...")
334
+ model.push_to_hub(args.output_repo)
335
+ processor.push_to_hub(args.output_repo)
336
+ print(f"Model available at: https://huggingface.co/{args.output_repo}")
337
+
338
+ # Update model card metadata with dataset info
339
+ from huggingface_hub import metadata_update
340
+
341
+ metadata_update(args.output_repo, {"datasets": [args.dataset]}, overwrite=True)
342
+ print(f" Model card updated with dataset: {args.dataset}")
343
+
344
+ print("\n" + "=" * 70)
345
+ print("Done!")
346
+ print("=" * 70)
347
+
348
+
349
+ if __name__ == "__main__":
350
+ # Show example usage if no arguments
351
+ if len(sys.argv) == 1:
352
+ print("=" * 70)
353
+ print("Gemma 3 4B VLM Streaming Fine-tuning with Unsloth")
354
+ print("=" * 70)
355
+ print("\nFine-tune Vision-Language Models using streaming datasets.")
356
+ print("Data streams directly from the Hub - no disk space needed.")
357
+ print("\nFeatures:")
358
+ print(" - ~60% less VRAM with Unsloth optimizations")
359
+ print(" - 2x faster training vs standard methods")
360
+ print(" - Trackio integration for monitoring")
361
+ print(" - Works with any VLM dataset in conversation format")
362
+ print("\nExample usage:")
363
+ print("\n uv run sft-gemma3-vlm.py \\")
364
+ print(" --max-steps 500 \\")
365
+ print(" --output-repo your-username/vlm-finetuned")
366
+ print("\nHF Jobs example:")
367
+ print("\n hf jobs uv run \\")
368
+ print(
369
+ " https://huggingface.co/datasets/unsloth/jobs/raw/main/sft-gemma3-vlm.py \\"
370
+ )
371
+ print(" --flavor a100-large --secrets HF_TOKEN \\")
372
+ print(" -- --max-steps 500 --output-repo your-username/vlm-finetuned")
373
+ print("\nFor full help: uv run sft-gemma3-vlm.py --help")
374
+ print("=" * 70)
375
+ sys.exit(0)
376
+
377
+ main()
sft-lfm2.5.py ADDED
@@ -0,0 +1,539 @@
1
+ # /// script
2
+ # requires-python = ">=3.10"
3
+ # dependencies = [
4
+ # "unsloth",
5
+ # "datasets",
6
+ # "trl==0.22.2",
7
+ # "huggingface_hub[hf_transfer]",
8
+ # "trackio",
9
+ # "tensorboard",
10
+ # "transformers==4.57.3",
11
+ # ]
12
+ # ///
13
+ """
14
+ Fine-tune LFM2.5-1.2B-Instruct (Liquid Foundation Model) using Unsloth optimizations.
15
+
16
+ Uses Unsloth for ~60% less VRAM and 2x faster training.
17
+ Supports epoch-based or step-based training with optional eval split.
18
+
19
+ Epoch-based training (recommended for full datasets):
20
+ uv run sft-lfm2.5.py \
21
+ --dataset mlabonne/FineTome-100k \
22
+ --num-epochs 1 \
23
+ --eval-split 0.2 \
24
+ --output-repo your-username/lfm-finetuned
25
+
26
+ Run on HF Jobs (1 epoch with eval):
27
+ hf jobs uv run sft-lfm2.5.py \
28
+ --flavor a10g-small --secrets HF_TOKEN --timeout 4h \
29
+ -- --dataset mlabonne/FineTome-100k \
30
+ --num-epochs 1 \
31
+ --eval-split 0.2 \
32
+ --output-repo your-username/lfm-finetuned
33
+
34
+ Step-based training (for quick tests):
35
+ uv run sft-lfm2.5.py \
36
+ --dataset mlabonne/FineTome-100k \
37
+ --max-steps 500 \
38
+ --output-repo your-username/lfm-finetuned
39
+ """
40
+
41
+ import argparse
42
+ import logging
43
+ import os
44
+ import sys
45
+ import time
46
+
47
+ # Force unbuffered output for HF Jobs logs
48
+ sys.stdout.reconfigure(line_buffering=True)
49
+ sys.stderr.reconfigure(line_buffering=True)
50
+
51
+ logging.basicConfig(
52
+ level=logging.INFO,
53
+ format="%(asctime)s - %(levelname)s - %(message)s",
54
+ )
55
+ logger = logging.getLogger(__name__)
56
+
57
+
58
+ def check_cuda():
59
+ """Check CUDA availability and exit if not available."""
60
+ import torch
61
+
62
+ if not torch.cuda.is_available():
63
+ logger.error("CUDA is not available. This script requires a GPU.")
64
+ logger.error("Run on a machine with a CUDA-capable GPU or use HF Jobs:")
65
+ logger.error(" hf jobs uv run sft-lfm2.5.py --flavor a10g-small ...")
66
+ sys.exit(1)
67
+ logger.info(f"CUDA available: {torch.cuda.get_device_name(0)}")
68
+
69
+
70
+ def parse_args():
71
+ parser = argparse.ArgumentParser(
72
+ description="Fine-tune LFM2.5-1.2B-Instruct with Unsloth",
73
+ formatter_class=argparse.RawDescriptionHelpFormatter,
74
+ epilog="""
75
+ Examples:
76
+ # Quick test run
77
+ uv run sft-lfm2.5.py \\
78
+ --dataset mlabonne/FineTome-100k \\
79
+ --max-steps 50 \\
80
+ --output-repo username/lfm-test
81
+
82
+ # Full training with eval
83
+ uv run sft-lfm2.5.py \\
84
+ --dataset mlabonne/FineTome-100k \\
85
+ --num-epochs 1 \\
86
+ --eval-split 0.2 \\
87
+ --output-repo username/lfm-finetuned
88
+
89
+ # With Trackio monitoring
90
+ uv run sft-lfm2.5.py \\
91
+ --dataset mlabonne/FineTome-100k \\
92
+ --num-epochs 1 \\
93
+ --output-repo username/lfm-finetuned \\
94
+ --trackio-space username/trackio
95
+ """,
96
+ )
97
+
98
+ # Model and data
99
+ parser.add_argument(
100
+ "--base-model",
101
+ default="LiquidAI/LFM2.5-1.2B-Instruct",
102
+ help="Base model (default: LiquidAI/LFM2.5-1.2B-Instruct)",
103
+ )
104
+ parser.add_argument(
105
+ "--dataset",
106
+ required=True,
107
+ help="Dataset in ShareGPT/conversation format (e.g., mlabonne/FineTome-100k)",
108
+ )
109
+ parser.add_argument(
110
+ "--output-repo",
111
+ required=True,
112
+ help="HF Hub repo to push model to (e.g., 'username/lfm-finetuned')",
113
+ )
114
+
115
+ # Training config
116
+ parser.add_argument(
117
+ "--num-epochs",
118
+ type=float,
119
+ default=None,
120
+ help="Number of epochs (default: None). Use instead of --max-steps.",
121
+ )
122
+ parser.add_argument(
123
+ "--max-steps",
124
+ type=int,
125
+ default=None,
126
+ help="Training steps (default: None). Use for quick tests or streaming.",
127
+ )
128
+ parser.add_argument(
129
+ "--batch-size",
130
+ type=int,
131
+ default=2,
132
+ help="Per-device batch size (default: 2)",
133
+ )
134
+ parser.add_argument(
135
+ "--gradient-accumulation",
136
+ type=int,
137
+ default=4,
138
+ help="Gradient accumulation steps (default: 4). Effective batch = batch-size * this",
139
+ )
140
+ parser.add_argument(
141
+ "--learning-rate",
142
+ type=float,
143
+ default=2e-4,
144
+ help="Learning rate (default: 2e-4)",
145
+ )
146
+ parser.add_argument(
147
+ "--max-seq-length",
148
+ type=int,
149
+ default=2048,
150
+ help="Maximum sequence length (default: 2048)",
151
+ )
152
+
153
+ # LoRA config
154
+ parser.add_argument(
155
+ "--lora-r",
156
+ type=int,
157
+ default=16,
158
+ help="LoRA rank (default: 16). Higher = more capacity but more VRAM",
159
+ )
160
+ parser.add_argument(
161
+ "--lora-alpha",
162
+ type=int,
163
+ default=16,
164
+ help="LoRA alpha (default: 16). Same as r per Unsloth recommendation",
165
+ )
166
+
167
+ # Logging
168
+ parser.add_argument(
169
+ "--trackio-space",
170
+ default=None,
171
+ help="HF Space for Trackio dashboard (e.g., 'username/trackio')",
172
+ )
173
+ parser.add_argument(
174
+ "--run-name",
175
+ default=None,
176
+ help="Custom run name for Trackio (default: auto-generated)",
177
+ )
178
+ parser.add_argument(
179
+ "--save-local",
180
+ default="lfm-output",
181
+ help="Local directory to save model (default: lfm-output)",
182
+ )
183
+
184
+ # Evaluation and data control
185
+ parser.add_argument(
186
+ "--eval-split",
187
+ type=float,
188
+ default=0.0,
189
+ help="Fraction of data for evaluation (0.0-0.5). Default: 0.0 (no eval)",
190
+ )
191
+ parser.add_argument(
192
+ "--num-samples",
193
+ type=int,
194
+ default=None,
195
+ help="Limit samples (default: None = use all)",
196
+ )
197
+ parser.add_argument(
198
+ "--seed",
199
+ type=int,
200
+ default=3407,
201
+ help="Random seed for reproducibility (default: 3407)",
202
+ )
203
+ parser.add_argument(
204
+ "--merge-model",
205
+ action="store_true",
206
+ default=False,
207
+ help="Merge LoRA weights into base model before uploading (larger file, easier to use)",
208
+ )
209
+
210
+ return parser.parse_args()
211
+
212
+
213
+ def main():
214
+ args = parse_args()
215
+
216
+ # Validate epochs/steps configuration
217
+ if not args.num_epochs and not args.max_steps:
218
+ args.num_epochs = 1
219
+ logger.info("Using default --num-epochs=1")
220
+
221
+ # Determine training duration display
222
+ if args.num_epochs:
223
+ duration_str = f"{args.num_epochs} epoch(s)"
224
+ else:
225
+ duration_str = f"{args.max_steps} steps"
226
+
227
+ print("=" * 70)
228
+ print("LFM2.5-1.2B Fine-tuning with Unsloth")
229
+ print("=" * 70)
230
+ print("\nConfiguration:")
231
+ print(f" Base model: {args.base_model}")
232
+ print(f" Dataset: {args.dataset}")
233
+ print(f" Num samples: {args.num_samples or 'all'}")
234
+ print(
235
+ f" Eval split: {args.eval_split if args.eval_split > 0 else '(disabled)'}"
236
+ )
237
+ print(f" Seed: {args.seed}")
238
+ print(f" Training: {duration_str}")
239
+ print(
240
+ f" Batch size: {args.batch_size} x {args.gradient_accumulation} = {args.batch_size * args.gradient_accumulation}"
241
+ )
242
+ print(f" Learning rate: {args.learning_rate}")
243
+ print(f" LoRA rank: {args.lora_r}")
244
+ print(f" Max seq length: {args.max_seq_length}")
245
+ print(f" Output repo: {args.output_repo}")
246
+ print(f" Trackio space: {args.trackio_space or '(not configured)'}")
247
+ print()
248
+
249
+ # Check CUDA before heavy imports
250
+ check_cuda()
251
+
252
+ # Enable fast transfers
253
+ os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
254
+
255
+ # Set Trackio space if provided
256
+ if args.trackio_space:
257
+ os.environ["TRACKIO_SPACE_ID"] = args.trackio_space
258
+ logger.info(
259
+ f"Trackio dashboard: https://huggingface.co/spaces/{args.trackio_space}"
260
+ )
261
+
262
+ # Import heavy dependencies
263
+ from unsloth import FastLanguageModel
264
+ from unsloth.chat_templates import standardize_data_formats, train_on_responses_only
265
+ from datasets import load_dataset
266
+ from trl import SFTTrainer, SFTConfig
267
+ from huggingface_hub import login
268
+
269
+ # Login to Hub
270
+ token = os.environ.get("HF_TOKEN") or os.environ.get("hfjob")
271
+ if token:
272
+ login(token=token)
273
+ logger.info("Logged in to Hugging Face Hub")
274
+ else:
275
+ logger.warning("HF_TOKEN not set - model upload may fail")
276
+
277
+ # 1. Load model
278
+ print("\n[1/5] Loading model...")
279
+ start = time.time()
280
+
281
+ model, tokenizer = FastLanguageModel.from_pretrained(
282
+ model_name=args.base_model,
283
+ max_seq_length=args.max_seq_length,
284
+ load_in_4bit=False,
285
+ load_in_8bit=False,
286
+ load_in_16bit=True,
287
+ full_finetuning=False,
288
+ )
289
+
290
+ # Add LoRA adapters with LFM-specific target modules
291
+ model = FastLanguageModel.get_peft_model(
292
+ model,
293
+ r=args.lora_r,
294
+ target_modules=[
295
+ "q_proj",
296
+ "k_proj",
297
+ "v_proj",
298
+ "out_proj",
299
+ "in_proj",
300
+ "w1",
301
+ "w2",
302
+ "w3",
303
+ ],
304
+ lora_alpha=args.lora_alpha,
305
+ lora_dropout=0,
306
+ bias="none",
307
+ use_gradient_checkpointing="unsloth",
308
+ random_state=args.seed,
309
+ use_rslora=False,
310
+ loftq_config=None,
311
+ )
312
+ print(f"Model loaded in {time.time() - start:.1f}s")
313
+
314
+ # 2. Load and prepare dataset
315
+ print("\n[2/5] Loading dataset...")
316
+ start = time.time()
317
+
318
+ dataset = load_dataset(args.dataset, split="train")
319
+ print(f" Dataset has {len(dataset)} total samples")
320
+
321
+ if args.num_samples:
322
+ dataset = dataset.select(range(min(args.num_samples, len(dataset))))
323
+ print(f" Limited to {len(dataset)} samples")
324
+
325
+ # Auto-detect and normalize conversation column
326
+ for col in ["messages", "conversations", "conversation"]:
327
+ if col in dataset.column_names and isinstance(dataset[0][col], list):
328
+ if col != "conversations":
329
+ dataset = dataset.rename_column(col, "conversations")
330
+ break
331
+ dataset = standardize_data_formats(dataset)
332
+
333
+ # Apply chat template
334
+ def formatting_prompts_func(examples):
335
+ texts = tokenizer.apply_chat_template(
336
+ examples["conversations"],
337
+ tokenize=False,
338
+ add_generation_prompt=False,
339
+ )
340
+ # Remove BOS token to avoid duplicates
341
+ return {"text": [x.removeprefix(tokenizer.bos_token) for x in texts]}
342
+
343
+ dataset = dataset.map(formatting_prompts_func, batched=True)
344
+
345
+ # Split for evaluation if requested
346
+ if args.eval_split > 0:
347
+ split = dataset.train_test_split(test_size=args.eval_split, seed=args.seed)
348
+ train_data = split["train"]
349
+ eval_data = split["test"]
350
+ print(f" Train: {len(train_data)} samples, Eval: {len(eval_data)} samples")
351
+ else:
352
+ train_data = dataset
353
+ eval_data = None
354
+
355
+ print(f" Dataset ready in {time.time() - start:.1f}s")
356
+
357
+ # 3. Configure trainer
358
+ print("\n[3/5] Configuring trainer...")
359
+
360
+ # Calculate steps per epoch for logging/eval intervals
361
+ effective_batch = args.batch_size * args.gradient_accumulation
362
+ num_samples = len(train_data)
363
+ steps_per_epoch = num_samples // effective_batch
364
+
365
+ # Determine run name and logging steps
366
+ if args.run_name:
367
+ run_name = args.run_name
368
+ elif args.num_epochs:
369
+ run_name = f"lfm2.5-sft-{args.num_epochs}ep"
370
+ else:
371
+ run_name = f"lfm2.5-sft-{args.max_steps}steps"
372
+
373
+ if args.num_epochs:
374
+ logging_steps = max(1, steps_per_epoch // 10)
375
+ save_steps = max(1, steps_per_epoch // 4)
376
+ else:
377
+ logging_steps = max(1, args.max_steps // 20)
378
+ save_steps = max(1, args.max_steps // 4)
379
+
380
+ # Determine reporting backend
381
+ if args.trackio_space:
382
+ report_to = ["tensorboard", "trackio"]
383
+ else:
384
+ report_to = ["tensorboard"]
385
+
386
+ training_config = SFTConfig(
387
+ output_dir=args.save_local,
388
+ dataset_text_field="text",
389
+ per_device_train_batch_size=args.batch_size,
390
+ gradient_accumulation_steps=args.gradient_accumulation,
391
+ warmup_steps=5,
392
+ num_train_epochs=args.num_epochs if args.num_epochs else 1,
393
+ max_steps=args.max_steps if args.max_steps else -1,
394
+ learning_rate=args.learning_rate,
395
+ logging_steps=logging_steps,
396
+ optim="adamw_8bit",
397
+ weight_decay=0.01,
398
+ lr_scheduler_type="linear",
399
+ seed=args.seed,
400
+ max_length=args.max_seq_length,
401
+ report_to=report_to,
402
+ run_name=run_name,
403
+ push_to_hub=True,
404
+ hub_model_id=args.output_repo,
405
+ save_steps=save_steps,
406
+ save_total_limit=3,
407
+ )
408
+
409
+ # Add evaluation config if eval is enabled
410
+ if eval_data:
411
+ if args.num_epochs:
412
+ training_config.eval_strategy = "epoch"
413
+ print(" Evaluation enabled: every epoch")
414
+ else:
415
+ training_config.eval_strategy = "steps"
416
+ training_config.eval_steps = max(1, args.max_steps // 5)
417
+ print(f" Evaluation enabled: every {training_config.eval_steps} steps")
418
+
419
+ trainer = SFTTrainer(
420
+ model=model,
421
+ tokenizer=tokenizer,
422
+ train_dataset=train_data,
423
+ eval_dataset=eval_data,
424
+ args=training_config,
425
+ )
426
+
427
+ # Train on responses only (mask user inputs)
428
+ trainer = train_on_responses_only(
429
+ trainer,
430
+ instruction_part="<|im_start|>user\n",
431
+ response_part="<|im_start|>assistant\n",
432
+ )
433
+
434
+ # 4. Train
435
+ print(f"\n[4/5] Training for {duration_str}...")
436
+ if args.num_epochs:
437
+ print(
438
+ f" (~{steps_per_epoch} steps/epoch, {int(steps_per_epoch * args.num_epochs)} total steps)"
439
+ )
440
+ start = time.time()
441
+
442
+ train_result = trainer.train()
443
+
444
+ train_time = time.time() - start
445
+ total_steps = train_result.metrics.get(
446
+ "train_steps", args.max_steps or steps_per_epoch * args.num_epochs
447
+ )
448
+ print(f"\nTraining completed in {train_time / 60:.1f} minutes")
449
+ print(f" Speed: {total_steps / train_time:.2f} steps/s")
450
+
451
+ # Print training metrics
452
+ train_loss = train_result.metrics.get("train_loss")
453
+ if train_loss:
454
+ print(f" Final train loss: {train_loss:.4f}")
455
+
456
+ # Print eval results if eval was enabled
457
+ if eval_data:
458
+ print("\nRunning final evaluation...")
459
+ try:
460
+ eval_results = trainer.evaluate()
461
+ eval_loss = eval_results.get("eval_loss")
462
+ if eval_loss:
463
+ print(f" Final eval loss: {eval_loss:.4f}")
464
+ if train_loss:
465
+ ratio = eval_loss / train_loss
466
+ if ratio > 1.5:
467
+ print(
468
+ f" Warning: Eval loss is {ratio:.1f}x train loss - possible overfitting"
469
+ )
470
+ else:
471
+ print(
472
+ f" Eval/train ratio: {ratio:.2f} - model generalizes well"
473
+ )
474
+ except Exception as e:
475
+ print(f" Warning: Final evaluation failed: {e}")
476
+ print(" Continuing to save model...")
477
+
478
+ # 5. Save and push
479
+ print("\n[5/5] Saving model...")
480
+
481
+ if args.merge_model:
482
+ print("Merging LoRA weights into base model...")
483
+ print(f"\nPushing merged model to {args.output_repo}...")
484
+ model.push_to_hub_merged(
485
+ args.output_repo,
486
+ tokenizer=tokenizer,
487
+ save_method="merged_16bit",
488
+ )
489
+ print(f"Merged model available at: https://huggingface.co/{args.output_repo}")
490
+ else:
491
+ model.save_pretrained(args.save_local)
492
+ tokenizer.save_pretrained(args.save_local)
493
+ print(f"Saved locally to {args.save_local}/")
494
+
495
+ print(f"\nPushing adapter to {args.output_repo}...")
496
+ model.push_to_hub(args.output_repo, tokenizer=tokenizer)
497
+ print(f"Adapter available at: https://huggingface.co/{args.output_repo}")
498
+
499
+ # Update model card metadata with dataset info
500
+ from huggingface_hub import metadata_update
501
+
502
+ metadata_update(args.output_repo, {"datasets": [args.dataset]}, overwrite=True)
503
+ print(f" Model card updated with dataset: {args.dataset}")
504
+
505
+ print("\n" + "=" * 70)
506
+ print("Done!")
507
+ print("=" * 70)
508
+
509
+
510
+ if __name__ == "__main__":
511
+ if len(sys.argv) == 1:
512
+ print("=" * 70)
513
+ print("LFM2.5-1.2B Fine-tuning with Unsloth")
514
+ print("=" * 70)
515
+ print("\nFine-tune Liquid Foundation Model with optional train/eval split.")
516
+ print("\nFeatures:")
517
+ print(" - ~60% less VRAM with Unsloth optimizations")
518
+ print(" - 2x faster training vs standard methods")
519
+ print(" - Epoch-based or step-based training")
520
+ print(" - Optional evaluation to detect overfitting")
521
+ print(" - Trains only on assistant responses (masked user inputs)")
522
+ print("\nEpoch-based training:")
523
+ print("\n uv run sft-lfm2.5.py \\")
524
+ print(" --dataset mlabonne/FineTome-100k \\")
525
+ print(" --num-epochs 1 \\")
526
+ print(" --eval-split 0.2 \\")
527
+ print(" --output-repo your-username/lfm-finetuned")
528
+ print("\nHF Jobs example:")
529
+ print("\n hf jobs uv run sft-lfm2.5.py \\")
530
+ print(" --flavor a10g-small --secrets HF_TOKEN --timeout 4h \\")
531
+ print(" -- --dataset mlabonne/FineTome-100k \\")
532
+ print(" --num-epochs 1 \\")
533
+ print(" --eval-split 0.2 \\")
534
+ print(" --output-repo your-username/lfm-finetuned")
535
+ print("\nFor full help: uv run sft-lfm2.5.py --help")
536
+ print("=" * 70)
537
+ sys.exit(0)
538
+
539
+ main()
sft-qwen3-vl.py ADDED
@@ -0,0 +1,596 @@
1
+ # /// script
2
+ # requires-python = ">=3.10"
3
+ # dependencies = [
4
+ # "unsloth",
5
+ # "datasets",
6
+ # "trl==0.22.2",
7
+ # "huggingface_hub[hf_transfer]",
8
+ # "trackio",
9
+ # "tensorboard",
10
+ # "transformers==4.57.1",
11
+ # ]
12
+ # ///
13
+ """
14
+ Fine-tune Qwen3-VL-8B Vision Language Model using Unsloth optimizations.
15
+
16
+ Uses Unsloth for ~60% less VRAM and 2x faster training.
17
+ Supports epoch-based or step-based training with optional eval split.
18
+
19
+ Epoch-based training (recommended for full datasets):
20
+ uv run sft-qwen3-vl.py \
21
+ --num-epochs 1 \
22
+ --eval-split 0.2 \
23
+ --output-repo your-username/vlm-finetuned
24
+
25
+ Run on HF Jobs (1 epoch with eval):
26
+ hf jobs uv run \
27
+ https://huggingface.co/datasets/unsloth/jobs/raw/main/sft-qwen3-vl.py \
28
+ --flavor a100-large --secrets HF_TOKEN --timeout 4h \
29
+ -- --num-epochs 1 --eval-split 0.2 --output-repo your-username/vlm-finetuned
30
+
31
+ Step-based training (for streaming or quick tests):
32
+ uv run sft-qwen3-vl.py \
33
+ --streaming \
34
+ --max-steps 500 \
35
+ --output-repo your-username/vlm-finetuned
36
+
37
+ Quick test with limited samples:
38
+ uv run sft-qwen3-vl.py \
39
+ --num-samples 500 \
40
+ --num-epochs 2 \
41
+ --eval-split 0.2 \
42
+ --output-repo your-username/vlm-test
43
+ """
44
+
45
+ import argparse
46
+ import logging
47
+ import os
48
+ import sys
49
+ import time
50
+
51
+ # Force unbuffered output for HF Jobs logs
52
+ sys.stdout.reconfigure(line_buffering=True)
53
+ sys.stderr.reconfigure(line_buffering=True)
54
+
55
+ logging.basicConfig(
56
+ level=logging.INFO,
57
+ format="%(asctime)s - %(levelname)s - %(message)s",
58
+ )
59
+ logger = logging.getLogger(__name__)
60
+
61
+
62
+ def check_cuda():
63
+ """Check CUDA availability and exit if not available."""
64
+ import torch
65
+
66
+ if not torch.cuda.is_available():
67
+ logger.error("CUDA is not available. This script requires a GPU.")
68
+ logger.error("Run on a machine with a CUDA-capable GPU or use HF Jobs:")
69
+ logger.error(
70
+ " hf jobs uv run https://huggingface.co/datasets/unsloth/jobs/raw/main/sft-qwen3-vl.py --flavor a100-large ..."
71
+ )
72
+ sys.exit(1)
73
+ logger.info(f"CUDA available: {torch.cuda.get_device_name(0)}")
74
+
75
+
76
+ def parse_args():
77
+ parser = argparse.ArgumentParser(
78
+ description="Fine-tune Qwen3-VL-8B with streaming datasets using Unsloth",
79
+ formatter_class=argparse.RawDescriptionHelpFormatter,
80
+ epilog="""
81
+ Examples:
82
+ # Quick test run
83
+ uv run sft-qwen3-vl.py \\
84
+ --max-steps 50 \\
85
+ --output-repo username/vlm-test
86
+
87
+ # Full training with Trackio monitoring
88
+ uv run sft-qwen3-vl.py \\
89
+ --max-steps 500 \\
90
+ --output-repo username/vlm-finetuned \\
91
+ --trackio-space username/trackio
92
+
93
+ # Custom dataset and model
94
+ uv run sft-qwen3-vl.py \\
95
+ --base-model unsloth/Qwen3-VL-8B-Instruct-unsloth-bnb-4bit \\
96
+ --dataset your-username/your-vlm-dataset \\
97
+ --max-steps 1000 \\
98
+ --output-repo username/custom-vlm
99
+ """,
100
+ )
101
+
102
+ # Model and data
103
+ parser.add_argument(
104
+ "--base-model",
105
+ default="unsloth/Qwen3-VL-8B-Instruct-unsloth-bnb-4bit",
106
+ help="Base VLM model (default: unsloth/Qwen3-VL-8B-Instruct-unsloth-bnb-4bit)",
107
+ )
108
+ parser.add_argument(
109
+ "--dataset",
110
+ default="davanstrien/iconclass-vlm-sft",
111
+ help="Dataset with 'images' and 'messages' columns (default: davanstrien/iconclass-vlm-sft)",
112
+ )
113
+ parser.add_argument(
114
+ "--output-repo",
115
+ required=True,
116
+ help="HF Hub repo to push model to (e.g., 'username/vlm-finetuned')",
117
+ )
118
+
119
+ # Training config
120
+ parser.add_argument(
121
+ "--num-epochs",
122
+ type=float,
123
+ default=None,
124
+ help="Number of epochs (default: None). Use instead of --max-steps for non-streaming mode.",
125
+ )
126
+ parser.add_argument(
127
+ "--max-steps",
128
+ type=int,
129
+ default=None,
130
+ help="Training steps (default: None). Required for streaming mode, optional otherwise.",
131
+ )
132
+ parser.add_argument(
133
+ "--batch-size",
134
+ type=int,
135
+ default=2,
136
+ help="Per-device batch size (default: 2)",
137
+ )
138
+ parser.add_argument(
139
+ "--gradient-accumulation",
140
+ type=int,
141
+ default=4,
142
+ help="Gradient accumulation steps (default: 4). Effective batch = batch-size * this",
143
+ )
144
+ parser.add_argument(
145
+ "--learning-rate",
146
+ type=float,
147
+ default=2e-4,
148
+ help="Learning rate (default: 2e-4)",
149
+ )
150
+ parser.add_argument(
151
+ "--max-seq-length",
152
+ type=int,
153
+ default=2048,
154
+ help="Maximum sequence length (default: 2048)",
155
+ )
156
+
157
+ # LoRA config
158
+ parser.add_argument(
159
+ "--lora-r",
160
+ type=int,
161
+ default=16,
162
+ help="LoRA rank (default: 16). Higher = more capacity but more VRAM",
163
+ )
164
+ parser.add_argument(
165
+ "--lora-alpha",
166
+ type=int,
167
+ default=16,
168
+ help="LoRA alpha (default: 16). Same as r per Unsloth notebook",
169
+ )
170
+
171
+ # Logging
172
+ parser.add_argument(
173
+ "--trackio-space",
174
+ default=None,
175
+ help="HF Space for Trackio dashboard (e.g., 'username/trackio')",
176
+ )
177
+ parser.add_argument(
178
+ "--run-name",
179
+ default=None,
180
+ help="Custom run name for Trackio (default: auto-generated from steps/epochs)",
181
+ )
182
+ parser.add_argument(
183
+ "--save-local",
184
+ default="vlm-qwen3-output",
185
+ help="Local directory to save model (default: vlm-qwen3-output)",
186
+ )
187
+
188
+ # Evaluation and data control
189
+ parser.add_argument(
190
+ "--eval-split",
191
+ type=float,
192
+ default=0.0,
193
+ help="Fraction of data for evaluation (0.0-0.5). Default: 0.0 (no eval)",
194
+ )
195
+ parser.add_argument(
196
+ "--num-samples",
197
+ type=int,
198
+ default=None,
199
+ help="Limit samples (default: None = use all for non-streaming, 500 for streaming)",
200
+ )
201
+ parser.add_argument(
202
+ "--seed",
203
+ type=int,
204
+ default=3407,
205
+ help="Random seed for reproducibility (default: 3407)",
206
+ )
207
+ parser.add_argument(
208
+ "--streaming",
209
+ action="store_true",
210
+ default=False,
211
+ help="Use streaming mode (default: False). Use for very large datasets.",
212
+ )
213
+ parser.add_argument(
214
+ "--merge-model",
215
+ action="store_true",
216
+ default=False,
217
+ help="Merge LoRA weights into base model before uploading (larger file, easier to use)",
218
+ )
219
+
220
+ return parser.parse_args()
221
+
222
+
223
+ def main():
224
+ args = parse_args()
225
+
226
+ # Validate epochs/steps configuration
227
+ if args.streaming and args.num_epochs:
228
+ logger.error(
229
+ "Cannot use --num-epochs with --streaming. Use --max-steps instead."
230
+ )
231
+ sys.exit(1)
232
+ if args.streaming and not args.max_steps:
233
+ args.max_steps = 500 # Default for streaming
234
+ logger.info("Using default --max-steps=500 for streaming mode")
235
+ if not args.streaming and not args.num_epochs and not args.max_steps:
236
+ args.num_epochs = 1 # Default to 1 epoch for non-streaming
237
+ logger.info("Using default --num-epochs=1 for non-streaming mode")
238
+
239
+ # Determine training duration display
240
+ if args.num_epochs:
241
+ duration_str = f"{args.num_epochs} epoch(s)"
242
+ else:
243
+ duration_str = f"{args.max_steps} steps"
244
+
245
+ print("=" * 70)
246
+ print("Qwen3-VL-8B Fine-tuning with Unsloth")
247
+ print("=" * 70)
248
+ print("\nConfiguration:")
249
+ print(f" Base model: {args.base_model}")
250
+ print(f" Dataset: {args.dataset}")
251
+ print(f" Streaming: {args.streaming}")
252
+ print(
253
+ f" Num samples: {args.num_samples or ('500' if args.streaming else 'all')}"
254
+ )
255
+ print(
256
+ f" Eval split: {args.eval_split if args.eval_split > 0 else '(disabled)'}"
257
+ )
258
+ print(f" Seed: {args.seed}")
259
+ print(f" Training: {duration_str}")
260
+ print(
261
+ f" Batch size: {args.batch_size} x {args.gradient_accumulation} = {args.batch_size * args.gradient_accumulation}"
262
+ )
263
+ print(f" Learning rate: {args.learning_rate}")
264
+ print(f" LoRA rank: {args.lora_r}")
265
+ print(f" Output repo: {args.output_repo}")
266
+ print(f" Trackio space: {args.trackio_space or '(not configured)'}")
267
+ print()
268
+
269
+ # Check CUDA before heavy imports
270
+ check_cuda()
271
+
272
+ # Enable fast transfers
273
+ os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
274
+
275
+ # Set Trackio space if provided
276
+ if args.trackio_space:
277
+ os.environ["TRACKIO_SPACE_ID"] = args.trackio_space
278
+ logger.info(
279
+ f"Trackio dashboard: https://huggingface.co/spaces/{args.trackio_space}"
280
+ )
281
+
282
+ # Import heavy dependencies (note: import from unsloth.trainer for VLM)
283
+ from unsloth import FastVisionModel
284
+ from unsloth.trainer import UnslothVisionDataCollator
285
+ from datasets import load_dataset
286
+ from trl import SFTTrainer, SFTConfig
287
+ from huggingface_hub import login
288
+
289
+ # Login to Hub
290
+ token = os.environ.get("HF_TOKEN")
291
+ if token:
292
+ login(token=token)
293
+ logger.info("Logged in to Hugging Face Hub")
294
+ else:
295
+ logger.warning("HF_TOKEN not set - model upload may fail")
296
+
297
+ # 1. Load model (Qwen returns tokenizer, not processor)
298
+ print("\n[1/5] Loading model...")
299
+ start = time.time()
300
+
301
+ model, tokenizer = FastVisionModel.from_pretrained(
302
+ args.base_model,
303
+ load_in_4bit=True,
304
+ use_gradient_checkpointing="unsloth",
305
+ )
306
+
307
+ model = FastVisionModel.get_peft_model(
308
+ model,
309
+ finetune_vision_layers=True,
310
+ finetune_language_layers=True,
311
+ finetune_attention_modules=True,
312
+ finetune_mlp_modules=True,
313
+ r=args.lora_r,
314
+ lora_alpha=args.lora_alpha,
315
+ lora_dropout=0,
316
+ bias="none",
317
+ random_state=3407,
318
+ use_rslora=False,
319
+ loftq_config=None,
320
+ )
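+ # With load_in_4bit the base weights stay frozen in 4-bit; only the LoRA adapter
+ # matrices on the selected vision/language/attention/MLP modules receive gradients,
+ # which accounts for much of the VRAM savings.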
321
+ print(f"Model loaded in {time.time() - start:.1f}s")
322
+
323
+ # 2. Load dataset (streaming or non-streaming)
324
+ print(
325
+ f"\n[2/5] Loading dataset ({'streaming' if args.streaming else 'non-streaming'})..."
326
+ )
327
+ start = time.time()
328
+
329
+ if args.streaming:
330
+ # Streaming mode: take limited samples
331
+ dataset = load_dataset(args.dataset, split="train", streaming=True)
332
+ num_samples = args.num_samples or 500
333
+
334
+ # Peek at first sample to show info
335
+ sample = next(iter(dataset))
336
+ if "messages" in sample:
337
+ print(f" Sample has {len(sample['messages'])} messages")
338
+ if "images" in sample:
339
+ img_count = (
340
+ len(sample["images"]) if isinstance(sample["images"], list) else 1
341
+ )
342
+ print(f" Sample has {img_count} image(s)")
343
+
344
+ # Reload and take samples
345
+ dataset = load_dataset(args.dataset, split="train", streaming=True)
346
+ all_data = list(dataset.take(num_samples))
347
+ print(f" Loaded {len(all_data)} samples in {time.time() - start:.1f}s")
348
+
349
+ if args.eval_split > 0:
350
+ # Manual shuffle for streaming (no built-in split)
351
+ import random
352
+
353
+ random.seed(args.seed)
354
+ random.shuffle(all_data)
355
+ split_idx = int(len(all_data) * (1 - args.eval_split))
356
+ train_data = all_data[:split_idx]
357
+ eval_data = all_data[split_idx:]
358
+ print(f" Train: {len(train_data)} samples, Eval: {len(eval_data)} samples")
359
+ else:
360
+ train_data = all_data
361
+ eval_data = None
362
+ else:
363
+ # Non-streaming: use proper train_test_split
364
+ dataset = load_dataset(args.dataset, split="train")
365
+ print(f" Dataset has {len(dataset)} total samples")
366
+
367
+ # Peek at first sample
368
+ sample = dataset[0]
369
+ if "messages" in sample:
370
+ print(f" Sample has {len(sample['messages'])} messages")
371
+ if "images" in sample:
372
+ img_count = (
373
+ len(sample["images"]) if isinstance(sample["images"], list) else 1
374
+ )
375
+ print(f" Sample has {img_count} image(s)")
376
+
377
+ if args.num_samples:
378
+ dataset = dataset.select(range(min(args.num_samples, len(dataset))))
379
+ print(f" Limited to {len(dataset)} samples")
380
+
381
+ if args.eval_split > 0:
382
+ split = dataset.train_test_split(test_size=args.eval_split, seed=args.seed)
383
+ train_data = split["train"]
384
+ eval_data = split["test"]
385
+ print(f" Train: {len(train_data)} samples, Eval: {len(eval_data)} samples")
386
+ else:
387
+ train_data = dataset
388
+ eval_data = None
389
+
390
+ print(f" Dataset ready in {time.time() - start:.1f}s")
391
+
392
+ # 3. Configure trainer
393
+ print("\n[3/5] Configuring trainer...")
394
+
395
+ # Enable training mode
396
+ FastVisionModel.for_training(model)
397
+
398
+ # Calculate steps per epoch for logging/eval intervals
399
+ effective_batch = args.batch_size * args.gradient_accumulation
400
+ num_samples = (
401
+ len(train_data) if hasattr(train_data, "__len__") else (args.num_samples or 500)
402
+ )
403
+ steps_per_epoch = num_samples // effective_batch
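+ # e.g. with the defaults: 2 (batch) x 4 (grad accum) = 8 samples per optimizer step,
+ # so 1,000 training samples -> 1000 // 8 = 125 steps per epoch.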
404
+
405
+ # Determine run name and logging steps
406
+ if args.run_name:
407
+ run_name = args.run_name
408
+ elif args.num_epochs:
409
+ run_name = f"qwen3-vl-sft-{args.num_epochs}ep"
410
+ else:
411
+ run_name = f"qwen3-vl-sft-{args.max_steps}steps"
412
+
413
+ if args.num_epochs:
414
+ logging_steps = max(1, steps_per_epoch // 10) # ~10 logs per epoch
415
+ save_steps = max(1, steps_per_epoch // 4) # ~4 saves per epoch
416
+ else:
417
+ logging_steps = max(1, args.max_steps // 20)
418
+ save_steps = max(1, args.max_steps // 4) # ~4 saves per run
419
+
420
+ # Determine reporting backend
421
+ if args.trackio_space:
422
+ report_to = ["tensorboard", "trackio"]
423
+ else:
424
+ report_to = ["tensorboard"]
425
+
426
+ training_config = SFTConfig(
427
+ output_dir=args.save_local,
428
+ per_device_train_batch_size=args.batch_size,
429
+ gradient_accumulation_steps=args.gradient_accumulation,
430
+ warmup_steps=5, # Per notebook (not warmup_ratio)
431
+ num_train_epochs=args.num_epochs if args.num_epochs else 1,
432
+ max_steps=args.max_steps if args.max_steps else -1, # -1 means use epochs
433
+ learning_rate=args.learning_rate,
434
+ logging_steps=logging_steps,
435
+ optim="adamw_8bit", # Per notebook
436
+ weight_decay=0.001,
437
+ lr_scheduler_type="cosine" if args.num_epochs else "linear",
438
+ seed=args.seed,
439
+ # VLM-specific settings (required for Unsloth)
440
+ remove_unused_columns=False,
441
+ dataset_text_field="",
442
+ dataset_kwargs={"skip_prepare_dataset": True},
443
+ max_length=args.max_seq_length,
444
+ # Logging - always use tensorboard for reliable logs
445
+ report_to=report_to,
446
+ run_name=run_name,
447
+ # Push checkpoints to Hub frequently
448
+ push_to_hub=True,
449
+ hub_model_id=args.output_repo,
450
+ save_steps=save_steps,
451
+ save_total_limit=3, # Keep last 3 checkpoints
452
+ )
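+ # push_to_hub=True combined with save_steps means checkpoints are uploaded to
+ # hub_model_id as they are saved, so a timed-out HF Job still leaves usable
+ # progress on the Hub; save_total_limit caps how many checkpoints are kept.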
453
+
454
+ # Add evaluation config if eval is enabled
455
+ if eval_data:
456
+ if args.num_epochs:
457
+ # For epoch-based training, eval at end of each epoch
458
+ training_config.eval_strategy = "epoch"
459
+ print(" Evaluation enabled: every epoch")
460
+ else:
461
+ training_config.eval_strategy = "steps"
462
+ training_config.eval_steps = max(1, args.max_steps // 5)
463
+ print(f" Evaluation enabled: every {training_config.eval_steps} steps")
464
+
465
+ # Use older 'tokenizer=' parameter (not processing_class) - required for Unsloth VLM
466
+ trainer = SFTTrainer(
467
+ model=model,
468
+ tokenizer=tokenizer, # Full processor, not processor.tokenizer
469
+ data_collator=UnslothVisionDataCollator(
470
+ model,
471
+ tokenizer,
472
+ train_on_responses_only=True,
473
+ instruction_part="<|im_start|>user\n",
474
+ response_part="<|im_start|>assistant\n",
475
+ ),
476
+ train_dataset=train_data,
477
+ eval_dataset=eval_data, # None if no eval
478
+ args=training_config,
479
+ )
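+ # train_on_responses_only means loss is computed only on assistant tokens.
+ # Conceptually (a sketch, not the collator's actual implementation):
+ # labels = input_ids.clone()
+ # labels[: index of "<|im_start|>assistant\n"] = -100 # ignored by cross-entropy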
480
+
481
+ # 4. Train
482
+ print(f"\n[4/5] Training for {duration_str}...")
483
+ if args.num_epochs:
484
+ print(
485
+ f" (~{steps_per_epoch} steps/epoch, {int(steps_per_epoch * args.num_epochs)} total steps)"
486
+ )
487
+ start = time.time()
488
+
489
+ train_result = trainer.train()
490
+
491
+ train_time = time.time() - start
492
+ # TrainOutput.metrics has no "train_steps" key; global_step is the actual step count
+ total_steps = train_result.global_step or (
493
+ args.max_steps or steps_per_epoch * args.num_epochs
494
+ )
495
+ print(f"\nTraining completed in {train_time / 60:.1f} minutes")
496
+ print(f" Speed: {total_steps / train_time:.2f} steps/s")
497
+
498
+ # Print training metrics
499
+ if train_result.metrics:
500
+ train_loss = train_result.metrics.get("train_loss")
501
+ if train_loss:
502
+ print(f" Final train loss: {train_loss:.4f}")
503
+
504
+ # Print eval results if eval was enabled
505
+ if eval_data:
506
+ print("\nRunning final evaluation...")
507
+ try:
508
+ eval_results = trainer.evaluate()
509
+ eval_loss = eval_results.get("eval_loss")
510
+ if eval_loss:
511
+ print(f" Final eval loss: {eval_loss:.4f}")
512
+ if train_loss:
513
+ ratio = eval_loss / train_loss
514
+ if ratio > 1.5:
515
+ print(
516
+ f" Warning: Eval loss is {ratio:.1f}x train loss - possible overfitting"
517
+ )
518
+ else:
519
+ print(
520
+ f" Eval/train ratio: {ratio:.2f} - model generalizes well"
521
+ )
522
+ except Exception as e:
523
+ print(f" Warning: Final evaluation failed: {e}")
524
+ print(" Continuing to save model...")
525
+
526
+ # 5. Save and push
527
+ print("\n[5/5] Saving model...")
528
+
529
+ if args.merge_model:
530
+ # Merge LoRA weights and push full model
531
+ print("Merging LoRA weights into base model...")
532
+ print(f"\nPushing merged model to {args.output_repo}...")
533
+ model.push_to_hub_merged(
534
+ args.output_repo,
535
+ tokenizer=tokenizer,
536
+ save_method="merged_16bit",
537
+ )
538
+ print(f"Merged model available at: https://huggingface.co/{args.output_repo}")
539
+ else:
540
+ # Save adapter only (smaller, requires base model to use)
541
+ model.save_pretrained(args.save_local)
542
+ tokenizer.save_pretrained(args.save_local)
543
+ print(f"Saved locally to {args.save_local}/")
544
+
545
+ print(f"\nPushing adapter to {args.output_repo}...")
546
+ model.push_to_hub(args.output_repo, tokenizer=tokenizer)
547
+ print(f"Adapter available at: https://huggingface.co/{args.output_repo}")
548
+
549
+ # Update model card metadata with dataset info
550
+ from huggingface_hub import metadata_update
551
+
552
+ metadata_update(args.output_repo, {"datasets": [args.dataset]}, overwrite=True)
553
+ print(f" Model card updated with dataset: {args.dataset}")
554
+
555
+ print("\n" + "=" * 70)
556
+ print("Done!")
557
+ print("=" * 70)
558
+
559
+
560
+ if __name__ == "__main__":
561
+ # Show example usage if no arguments
562
+ if len(sys.argv) == 1:
563
+ print("=" * 70)
564
+ print("Qwen3-VL-8B Fine-tuning with Unsloth")
565
+ print("=" * 70)
566
+ print("\nFine-tune Vision-Language Models with optional train/eval split.")
567
+ print("\nFeatures:")
568
+ print(" - ~60% less VRAM with Unsloth optimizations")
569
+ print(" - 2x faster training vs standard methods")
570
+ print(" - Epoch-based or step-based training")
571
+ print(" - Optional evaluation to detect overfitting")
572
+ print(" - Trackio integration for monitoring")
573
+ print("\nEpoch-based training (recommended for full datasets):")
574
+ print("\n uv run sft-qwen3-vl.py \\")
575
+ print(" --num-epochs 1 \\")
576
+ print(" --eval-split 0.2 \\")
577
+ print(" --output-repo your-username/vlm-finetuned")
578
+ print("\nHF Jobs example (1 epoch with eval):")
579
+ print("\n hf jobs uv run \\")
580
+ print(
581
+ " https://huggingface.co/datasets/unsloth/jobs/raw/main/sft-qwen3-vl.py \\"
582
+ )
583
+ print(" --flavor a100-large --secrets HF_TOKEN --timeout 4h \\")
584
+ print(
585
+ " -- --num-epochs 1 --eval-split 0.2 --output-repo your-username/vlm-finetuned"
586
+ )
587
+ print("\nStep-based training (for streaming or quick tests):")
588
+ print("\n uv run sft-qwen3-vl.py \\")
589
+ print(" --streaming \\")
590
+ print(" --max-steps 500 \\")
591
+ print(" --output-repo your-username/vlm-finetuned")
592
+ print("\nFor full help: uv run sft-qwen3-vl.py --help")
593
+ print("=" * 70)
594
+ sys.exit(0)
595
+
596
+ main()