---
frameworks:
- Pytorch
license: apache-2.0
tasks:
- image-text-to-text
model-type:
- qwen
domain:
- multi-modal
language:
- en
base_model:
- Qwen/Qwen3-8B
- Qwen/Qwen2.5-VL-7B-Instruct
---

Simple-VL-8B is a vision-language (VL) model built by integrating the language modeling capabilities of Qwen3-8B with the visual understanding architecture of Qwen2.5-VL-7B-Instruct.

The model was trained with the [ms-swift](https://github.com/modelscope/ms-swift/tree/main) framework; the SOP document can be found [here](https://swift.readthedocs.io/en/latest/BestPractices/Rapidly-Training-VL-model.html).

Base Models:
- [Qwen2.5-VL-7B-Instruct](https://www.modelscope.cn/models/Qwen/Qwen2.5-VL-7B-Instruct)
- [Qwen3-8B](https://www.modelscope.cn/models/Qwen/Qwen3-8B)

The Simple-VL-8B model was created through an architecture modification followed by two-stage training:

1. Architecture Modification: The LLM component of the original Qwen2.5-VL-7B-Instruct model was replaced with the weights of Qwen3-8B, and several key configuration parameters were updated to match Qwen3-8B's structure (see the sketch after this list).
2. Two-Stage Training (see the ms-swift sketch below):
   1. Stage 1: Only the vision-to-language aligner (the merger layer) was trained, while the ViT and LLM components were kept frozen.
   2. Stage 2: All components were unfrozen and jointly fine-tuned to improve overall performance.

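The architecture modification in step 1 can be pictured as plain module and config surgery. Below is a minimal, hypothetical sketch of the idea, not the exact recipe from the SOP document: the attribute paths (`model`, `lm_head`, `visual.merger.mlp`) and the flat config fields match older transformers layouts of Qwen2.5-VL and should be verified against the actual checkpoints.

```python
import torch
import torch.nn as nn
from modelscope import AutoModelForCausalLM, Qwen2_5_VLForConditionalGeneration

# Hypothetical sketch: load both checkpoints, then transplant the text stack.
vl = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-7B-Instruct", torch_dtype=torch.bfloat16
)
llm = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B", torch_dtype=torch.bfloat16)

# Swap the decoder and output head. These attribute names ("model", "lm_head")
# are assumptions based on older Qwen2.5-VL layouts, not a verified recipe.
vl.model = llm.model
vl.lm_head = llm.lm_head

# Qwen3-8B's hidden width differs from the 7B decoder it replaces, so the
# aligner's final projection must be re-created at the new width. The
# "visual.merger.mlp" path is likewise an assumption to verify.
proj = vl.visual.merger.mlp[-1]
vl.visual.merger.mlp[-1] = nn.Linear(
    proj.in_features, llm.config.hidden_size, dtype=torch.bfloat16
)

# Mirror Qwen3-8B's structural fields into the (flat, pre-refactor) VL config.
for key in ("hidden_size", "intermediate_size", "num_hidden_layers",
            "num_attention_heads", "num_key_value_heads", "vocab_size"):
    setattr(vl.config, key, getattr(llm.config, key))

vl.save_pretrained("Simple-VL-8B-init")  # starting point for Stage 1
```

The freshly re-created merger projection is untrained at this point, which is why Stage 1 trains only the aligner before anything else is unfrozen.
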
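The two stages map directly onto ms-swift's component freeze switches. A hedged sketch of what the stage commands might look like (the `--freeze_*` flags follow the ms-swift docs; the model path, dataset, and checkpoint names are placeholders rather than the actual training recipe):

```bash
# Stage 1: train only the vision-to-language aligner; ViT and LLM stay frozen.
swift sft \
    --model Simple-VL-8B-init \
    --train_type full \
    --freeze_vit true \
    --freeze_llm true \
    --freeze_aligner false \
    --dataset <multimodal-dataset> \
    --output_dir output/stage1

# Stage 2: unfreeze everything and fine-tune jointly.
swift sft \
    --model output/stage1/checkpoint-xxx \
    --train_type full \
    --freeze_vit false \
    --freeze_llm false \
    --freeze_aligner false \
    --dataset <multimodal-dataset> \
    --output_dir output/stage2
```
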
The following code snippet shows how to chat with the model:

```python
from modelscope import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Default: load the model on the available device(s)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "swift/Simple-VL-8B", torch_dtype="auto", device_map="auto"
)

# Default processor
processor = AutoProcessor.from_pretrained("swift/Simple-VL-8B")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: generate, then strip the prompt tokens from each output sequence
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```

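To trade visual detail for memory, the processor's visual token budget can be bounded when it is created, following upstream Qwen2.5-VL usage; the pixel bounds below are illustrative defaults, not tuned values, and assume Simple-VL-8B inherits the Qwen2.5-VL processor.

```python
# Each visual token covers a 28x28 pixel patch, so these bounds cap the
# per-image token count at roughly 256-1280.
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
    "swift/Simple-VL-8B", min_pixels=min_pixels, max_pixels=max_pixels
)
```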