---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2-VL-2B-Instruct
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- reasoner
- r1
- exp
- diagram
- math
- theorem
- text-generation-inference
---

> [!WARNING]
> **Note:** This model contains artifacts and may perform poorly in some cases.

# **Open-R1-Mini-Experimental**

The **Open-R1-Mini-Experimental** model is a fine-tuned version of Qwen2-VL-2B-Instruct, trained on **R1 reasoning logits data** and designed for contextual reasoning and multi-modal understanding. It combines a conversational interface with deep reasoning capabilities to handle complex multi-modal tasks efficiently.

# **Key Enhancements**

* **Advanced Contextual Reasoning**: Open-R1-Mini-Experimental leverages R1 reasoning logits data to strengthen logical inference and decision-making in reasoning tasks.

* **Understanding images of various resolutions and aspect ratios**: The model performs strongly on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, and MTVQA.

* **Long-Context Video Understanding**: Capable of processing and reasoning over videos of 20 minutes or more for high-quality video-based question answering, content creation, and dialogue (see the video sketch under **How to Use**).

* **Device Integration**: With strong reasoning and decision-making abilities, the model can be integrated into mobile devices, robots, and automation systems for real-time operation based on both visual and textual input.

* **Multilingual Support**: Supports text understanding within images in many languages, including English, Chinese, Japanese, Korean, Arabic, Vietnamese, and most European languages.

# **Sample Inference**

*(Sample inference examples 1–5: images omitted.)*

**Demo:** https://huggingface.co/prithivMLmods/Open-R1-Mini-Experimental/blob/main/open-r1-reasoner-doc-py/open-r1-exp.ipynb

# **How to Use**

A suggested instruction prompt for step-by-step mathematical and geometric reasoning; it can be passed as the text content of a user message (see the sketch after the **Key Features** list):

```python
instruction = (
    "Analyze the provided image and the associated problem statement. "
    "Carefully consider the geometric relationships and mathematical principles involved. "
    "Provide a step-by-step solution to the problem, ensuring that each step is logically "
    "derived from the previous one. Conclude with the correct answer, clearly labeled."
)
```

```python
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the model with automatic device placement
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Open-R1-Mini-Experimental", torch_dtype="auto", device_map="auto"
)

# Recommended: enable flash_attention_2 for better performance on multi-image and video tasks
# model = Qwen2VLForConditionalGeneration.from_pretrained(
#     "prithivMLmods/Open-R1-Mini-Experimental",
#     torch_dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )

# Load the processor
processor = AutoProcessor.from_pretrained("prithivMLmods/Open-R1-Mini-Experimental")

# Optional: bound the visual token count per image to trade accuracy for memory
# min_pixels = 256 * 28 * 28
# max_pixels = 1280 * 28 * 28
# processor = AutoProcessor.from_pretrained(
#     "prithivMLmods/Open-R1-Mini-Experimental", min_pixels=min_pixels, max_pixels=max_pixels
# )

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Analyze the context of this image."},
        ],
    }
]

# Prepare inputs
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: generate a response and decode only the newly generated tokens
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
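The same `messages` format extends to video input. A minimal sketch, following the base model's video message convention (the video path, `fps` value, and prompt below are illustrative placeholders, not part of the original card):

```python
# Video inference sketch: replace the placeholder path with a real clip
video_messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": "file:///path/to/video.mp4",  # placeholder path
                "max_pixels": 360 * 420,
                "fps": 1.0,
            },
            {"type": "text", "text": "Summarize the key events in this video."},
        ],
    }
]

text = processor.apply_chat_template(video_messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(video_messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt"
).to("cuda")

generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
print(processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True)[0])
```
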
# **Buffer Handling**

When streaming generated text (for example, with `TextIteratorStreamer`, as sketched below), strip the end-of-turn token from the accumulated buffer before yielding partial output:

```python
buffer = ""
for new_text in streamer:
    buffer += new_text
    # Remove the model's end-of-turn marker so partial output stays clean
    buffer = buffer.replace("<|im_end|>", "")
    yield buffer
```
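The `streamer` above is assumed to be a `TextIteratorStreamer` fed by a generation call running in a background thread. A minimal setup sketch, reusing `model`, `processor`, and `inputs` from the example above:

```python
from threading import Thread

from transformers import TextIteratorStreamer

# Stream decoded text as it is produced; skip_prompt avoids echoing the input
streamer = TextIteratorStreamer(processor.tokenizer, skip_prompt=True)

# Run generation in a background thread so the streamer can be consumed here
thread = Thread(
    target=model.generate, kwargs=dict(**inputs, streamer=streamer, max_new_tokens=128)
)
thread.start()

for new_text in streamer:
    print(new_text, end="", flush=True)
thread.join()
```

Note that with default decoding options the stream may still contain special tokens such as `<|im_end|>`, which is why the buffer-handling loop strips them.
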
# **Key Features**

1. **Advanced Contextual Reasoning:**
   - Optimized for **context-aware problem solving** and **logical inference** based on R1 reasoning logits.

2. **Optical Character Recognition (OCR):**
   - Extracts and processes text from images with high accuracy.

3. **Mathematical and Logical Problem Solving:**
   - Supports complex step-by-step reasoning and outputs equations in **LaTeX format** (see the sketch after this list).

4. **Conversational and Multi-Turn Interaction:**
   - Handles **multi-turn dialogue** with improved memory retention and response coherence.

5. **Multi-Modal Inputs & Outputs:**
   - Processes images, text, and combined inputs to generate insightful analyses.

6. **Secure and Efficient Model Loading:**
   - Uses **Safetensors** for faster and more secure model weight handling.
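
As a sketch of the math-reasoning flow referenced in item 3, the `instruction` prompt defined under **How to Use** can be supplied alongside a problem image (the image URL below is a hypothetical placeholder):

```python
# Hypothetical math-reasoning request; swap in a real geometry problem image
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/geometry_problem.png"},  # placeholder URL
            {"type": "text", "text": instruction},
        ],
    }
]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt"
).to("cuda")

# Allow a longer budget so the step-by-step derivation is not truncated
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
print(processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True)[0])
```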