Towards Pixel-Level VLM Perception via Simple Points Prediction (arXiv:2601.19228)
Youtu-VL: Unleashing Visual Potential via Unified Vision-Language Supervision (arXiv:2601.19798)
OCRVerse: Towards Holistic OCR in End-to-End Vision-Language Models (arXiv:2601.21639)
PaddleOCR-VL-1.5: Towards a Multi-Task 0.9B VLM for Robust In-the-Wild Document Parsing (arXiv:2601.21957)
CodeOCR: On the Effectiveness of Vision Language Models in Code Understanding (arXiv:2602.01785)
WorldVQA: Measuring Atomic World Knowledge in Multimodal Large Language Models (arXiv:2602.02537)
STEM: Scaling Transformers with Embedding Modules (arXiv:2601.10639)
SpatiaLab: Can Vision-Language Models Perform Spatial Reasoning in the Wild? (arXiv:2602.03916)
EvoCUA: Evolving Computer Use Agents via Learning from Scalable Synthetic Experience (arXiv:2601.15876)
MMFineReason: Closing the Multimodal Reasoning Gap via Open Data-Centric Methods (arXiv:2601.21821)
P1-VL: Bridging Visual Perception and Scientific Reasoning in Physics Olympiads (arXiv:2602.09443)