# shreyask/voxtral-mini-4b-realtime-mlx-mixed-4-6

This model was converted to MLX format from `shreyask/voxtral-mini-4b-realtime-mlx-fp16` using mlx-audio version 0.3.2.

Refer to the original model card for more details on the model.

## Use with mlx-audio

```bash
pip install -U mlx-audio
```

CLI Example:

```bash
python -m mlx_audio.stt.generate --model shreyask/voxtral-mini-4b-realtime-mlx-mixed-4-6 --audio "audio.wav"
```

Python Example:

```python
from mlx_audio.stt.utils import load_model
from mlx_audio.stt.generate import generate_transcription

# Load the quantized model from the Hugging Face Hub (or a local path).
model = load_model("shreyask/voxtral-mini-4b-realtime-mlx-mixed-4-6")

# Transcribe a single file and also write the result to a text file.
transcription = generate_transcription(
    model=model,
    audio_path="path_to_audio.wav",
    output_path="path_to_output.txt",
    format="txt",
    verbose=True,
)
print(transcription.text)
```
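
For several recordings, the same `load_model` and `generate_transcription` calls shown above can be reused in a loop so the model is only loaded once. The sketch below only reuses those two calls; the `recordings/` directory and the per-file output naming are illustrative assumptions, not part of the mlx-audio API.

```python
from pathlib import Path

from mlx_audio.stt.utils import load_model
from mlx_audio.stt.generate import generate_transcription

# Load the quantized model once and reuse it for every file.
model = load_model("shreyask/voxtral-mini-4b-realtime-mlx-mixed-4-6")

# Hypothetical input directory; replace with your own audio files.
for audio_file in sorted(Path("recordings").glob("*.wav")):
    transcription = generate_transcription(
        model=model,
        audio_path=str(audio_file),
        # Write each transcript next to its source audio file.
        output_path=str(audio_file.with_suffix(".txt")),
        format="txt",
        verbose=False,
    )
    print(f"{audio_file.name}: {transcription.text}")
```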