Incredible Model, Thank You!
I just wanted to take a moment to thank the development team at LightOn AI for this model.
I process very complex legal documents with strange formatting: a mixture of one and two columns on the same page, inconsistent fonts and font sizes, variable headers, unusual signature blocks, tables with definitions, etc. Not only has this model been perfectly accurate in my tests so far, but the speed is staggering. I previously had to use large vision models that could comprehend the content and parse out the formatting, and they could take up to a minute per page for dense agreements. This model extracts perfect content in under 2 seconds per page (RTX 6000 Pro or RTX 5090), and it uses almost no VRAM compared to the larger models I was using before. It also doesn't require any specialized pipeline: I dropped it into my existing OCR pipeline and it worked on the first shot. This is a miracle! :)
If there are any VCs/PE firms looking for a great investment, call up this company and see if they want your money! This is an amazing product that will kill it in the enterprise.
Thank you again LightOn team, the community appreciates your work here!
MD
My use case is also legal documents, and I agree this model is really great. I have gotten far better results with this model than deepseek OCR 1 and 2. I appreciate how it retains page numbers in the headers and footers. This is important because lawyers need to cite page numbers. Thank you.
same here. as a lawyer, i work with badly scanned documents A LOT. so i've (we've?) built an app to run ocr models locally with mlx-vlm, with a pretty ui, using 5.3-codex via the codex app
i've used 4 recently released models converted to mlx format by mlx-community:
- DeepSeek-OCR-2-bf16
- GLM-OCR-bf16
- LightOnOCR-2-1B-bf16
- PaddleOCR-VL-1.5-bf16
i've tested them all a bit and so far the most usable one is LightOnOCR-2 by @lightonai. tbh i thought it was going to be deepseek-ocr-2 because (i) their model is bigger (3b vs 1b) and (ii) c'mon, it's deepseek
idk, maybe the reason the other models are lagging has something to do with how they were converted to mlx
will keep testing, but huge thanks to the lighton team for sharing a great open model with the community! i'll definitely use their model a lot (until a better one comes out, obviously)
what a time to be alive!
(this is mostly a copy of my recent tweet - just thought it'd be nice to repeat it here to once again thank the team - you did a great job!)