Open4bits / Whisper Tiny FP16
This repository provides the Whisper Tiny model converted to FP16 (float16) precision by Open4bits. FP16 halves the weight storage of the original FP32 checkpoint, enabling efficient inference with reduced memory usage.
The underlying Whisper model and architecture are owned by OpenAI. This repository contains only a precision-converted version of the original model weights.
The model is designed for fast, lightweight multilingual speech-to-text tasks and is well suited for resource-constrained environments.
Model Overview
Whisper is a sequence-to-sequence transformer model developed by OpenAI for automatic speech recognition and speech translation.
This release uses the Tiny variant, prioritizing speed and low memory usage while preserving the original architecture.
Model Details
- Architecture: Whisper Tiny
- Parameters: ~37.85 million
- Precision: float16 (FP16)
- Task: Automatic Speech Recognition (ASR)
- Languages: Multilingual
- Weight tying: Preserved
- Compatibility: Hugging Face Transformers, PyTorch
Compared to larger Whisper variants, this model offers significantly faster inference and lower VRAM requirements, with reduced accuracy in some scenarios.
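Since the card lists Hugging Face Transformers and PyTorch compatibility, a minimal transcription sketch follows. It assumes the repository id from this card (`Open4bits/whisper-tiny-f16`) and a placeholder audio path (`audio.wav`) that you should replace with your own file; FP16 is only requested when a GPU is available, since FP16 inference on CPU is typically unsupported or slow.

```python
# Sketch: transcribing a local audio file with the Transformers ASR pipeline.
# "audio.wav" is a placeholder path; replace it with your own recording.
import torch
from transformers import pipeline

# Use FP16 on GPU; fall back to FP32 on CPU.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device != "cpu" else torch.float32

asr = pipeline(
    "automatic-speech-recognition",
    model="Open4bits/whisper-tiny-f16",
    torch_dtype=dtype,
    device=device,
)

result = asr("audio.wav")
print(result["text"])
```

For long recordings, the pipeline also accepts a `chunk_length_s` argument to split audio into overlapping windows before decoding.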
Intended Use
This model is intended for:
- Fast speech-to-text transcription
- Lightweight and real-time ASR applications
- Edge or low-resource deployments
- Research and prototyping
Limitations
- Lower transcription accuracy compared to larger Whisper variants
- Performance depends on audio quality, language, and accent
- Not fine-tuned for domain-specific or noisy audio
License
This model is released under the Apache License 2.0. The original Whisper model and associated intellectual property are owned by OpenAI.
Support
If you find this model useful, please consider supporting the project by liking the repository. Your support helps us continue releasing and maintaining high-quality open models.
Model tree for Open4bits/whisper-tiny-f16
- Base model: openai/whisper-tiny
Evaluation results
- WER on LibriSpeech (clean) test set (self-reported): 7.54
- WER on LibriSpeech (other) test set (self-reported): 17.15
- WER on Common Voice 11.0 test set (self-reported): 141.00
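Word error rate (WER), the metric reported above, is the word-level edit distance (substitutions, deletions, and insertions) between the model's hypothesis and the reference transcript, divided by the number of reference words. A minimal sketch of the computation (not the exact scoring script used for these numbers, which is self-reported):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,      # deletion
                dp[i][j - 1] + 1,      # insertion
                dp[i - 1][j - 1] + cost,  # substitution or match
            )
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the cat sat", "the bat sat"))  # one substitution in three words
```

Note that WER can exceed 100% when the hypothesis contains many insertions relative to the reference, which is why a value like 141.00 is possible.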