This is an MXFP4_MOE quantization of the model Llama-4-Maverick-17B-128E-Instruct.

Quantized from the BF16 GGUFs at: https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF

Original model: https://huggingface.co/meta-llama/Llama-4-Maverick-17B-128E-Instruct

This model's GGUFs have been removed to conserve storage space in my repositories.
If you want them, just message me and I will make them available on demand.

Model tree for noctrex/Llama-4-Maverick-17B-128E-Instruct-MXFP4_MOE-GGUF