Tags: Video-Text-to-Text · Transformers · Safetensors · English · internvl_chat · feature-extraction · multimodal · custom_code
Instructions to use OpenGVLab/InternVideo2_5_Chat_8B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use OpenGVLab/InternVideo2_5_Chat_8B with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "OpenGVLab/InternVideo2_5_Chat_8B",
    trust_remote_code=True,
    dtype="auto",
)
```

- Notebooks
- Google Colab
- Kaggle
Can it run on Mac Silicon?
#7
by Santiagolc - opened
Does anybody know if it is possible to run this model on a Mac with a silicon chip? I am unable to install either flash attention or decord.
I tried running it on my Mac with a couple of workarounds:
- I solved the decord issue by using eva-decord (https://pypi.org/project/eva-decord/)
- To fix the flash attention import issue, I used the suggestion provided here: https://huggingface.co/qnguyen3/nanoLLaVA-1.5/discussions/4
Unfortunately, I got an MPS backend out-of-memory error when using the mps device. But I assume that with enough MPS memory you could run it, or you could fall back to the CPU.
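For reference, the flash attention workaround linked above can be sketched like this: patch `transformers.dynamic_module_utils.get_imports` so the model's custom code no longer requires `flash_attn` at import time. This is a minimal sketch based on that linked discussion, not an official API; `fixed_get_imports` is a name chosen here for illustration.

```python
from unittest.mock import patch
from transformers.dynamic_module_utils import get_imports


def fixed_get_imports(filename):
    """Drop flash_attn from the parsed import list so trust_remote_code
    models can load on machines where flash-attn cannot be installed
    (e.g. Apple Silicon)."""
    imports = get_imports(filename)
    return [imp for imp in imports if imp != "flash_attn"]


# Usage sketch (downloads the full 8B checkpoint, so not run here):
# from transformers import AutoModel
# with patch("transformers.dynamic_module_utils.get_imports", fixed_get_imports):
#     model = AutoModel.from_pretrained(
#         "OpenGVLab/InternVideo2_5_Chat_8B", trust_remote_code=True
#     )
```

Note that this only bypasses the import check; the model's attention code still has to run without flash-attn kernels, which worked in the linked nanoLLaVA thread but is not guaranteed for every custom-code model.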