# Otilde/Ministral-3-3B-Instruct-2512-MLX-F32
This model, [Otilde/Ministral-3-3B-Instruct-2512-MLX-F32](https://huggingface.co/Otilde/Ministral-3-3B-Instruct-2512-MLX-F32), was converted to MLX format from [mistralai/Ministral-3-3B-Instruct-2512](https://huggingface.co/mistralai/Ministral-3-3B-Instruct-2512) using mlx-lm version 0.30.0.
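For reference, conversions like this one are typically produced with the `mlx_lm.convert` command-line tool. The invocation below is a sketch, not the exact command used for this repository; the output path and the `--dtype` flag are assumptions based on recent mlx-lm releases, so check `mlx_lm.convert --help` for your installed version.

```bash
# Sketch: convert the original Hugging Face checkpoint to MLX format,
# keeping the weights in full float32 precision (no quantization).
mlx_lm.convert \
    --hf-path mistralai/Ministral-3-3B-Instruct-2512 \
    --mlx-path Ministral-3-3B-Instruct-2512-MLX-F32 \
    --dtype float32
```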
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the converted model and its tokenizer from the Hub.
model, tokenizer = load("Otilde/Ministral-3-3B-Instruct-2512-MLX-F32")

prompt = "hello"

# Wrap the prompt with the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
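The converted model can also be run directly from the command line via the `mlx_lm.generate` entry point. The `--max-tokens` value below is arbitrary, and the exact flags should be checked against `mlx_lm.generate --help` for the mlx-lm version you have installed.

```bash
# Quick command-line generation with the converted model.
mlx_lm.generate \
    --model Otilde/Ministral-3-3B-Instruct-2512-MLX-F32 \
    --prompt "hello" \
    --max-tokens 256
```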
## Model details

- Model size: 3B parameters
- Tensor type: F32
## Model tree for Otilde/Ministral-3-3B-Instruct-2512-MLX-F32

- Base model: [mistralai/Ministral-3-3B-Base-2512](https://huggingface.co/mistralai/Ministral-3-3B-Base-2512)
- Converted from: [mistralai/Ministral-3-3B-Instruct-2512](https://huggingface.co/mistralai/Ministral-3-3B-Instruct-2512)