---
title: LFM2.5-Audio WebGPU Demo
emoji: 🎙️
colorFrom: indigo
colorTo: purple
sdk: static
app_file: dist/index.html
app_build_command: npm run build
pinned: false
license: other
models:
  - LiquidAI/LFM2.5-Audio-1.5B-ONNX
tags:
  - audio
  - speech
  - tts
  - asr
  - webgpu
  - onnx
  - transformers.js
short_description: ASR, TTS & conversational audio in your browser
---

# LFM2.5-Audio WebGPU Demo

This Space demonstrates [LFM2.5-Audio-1.5B](https://huggingface.co/LiquidAI/LFM2.5-Audio-1.5B) running entirely in your browser using WebGPU and ONNX Runtime Web.

## Features

- **ASR (Speech Recognition)**: Transcribe audio to text
- **TTS (Text-to-Speech)**: Convert text to natural speech
- **Interleaved**: Mixed audio and text conversation

## Requirements

- A browser with WebGPU support (Chrome/Edge 113+)
- Enable WebGPU at `chrome://flags/#enable-unsafe-webgpu` if needed

## Model

Uses quantized ONNX models from [LiquidAI/LFM2.5-Audio-1.5B-ONNX](https://huggingface.co/LiquidAI/LFM2.5-Audio-1.5B-ONNX).

## License

Model weights are released under the [LFM 1.0 License](https://huggingface.co/LiquidAI/LFM2.5-Audio-1.5B-ONNX/blob/main/LICENSE).
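
## Checking WebGPU support

Before downloading the model, a page like this one can verify that the browser exposes a usable WebGPU adapter. The sketch below uses only the standard `navigator.gpu` API; the helper name `hasWebGPU` is illustrative and not part of this Space's source.

```ts
// Minimal WebGPU support check (sketch). With `@webgpu/types` installed,
// the `any` cast below can be replaced with proper typings.
async function hasWebGPU(): Promise<boolean> {
  const gpu = (navigator as any).gpu; // undefined in browsers without WebGPU
  if (!gpu) return false;
  try {
    // Requesting an adapter can still fail, e.g. on a blocklisted GPU/driver.
    const adapter = await gpu.requestAdapter();
    return adapter !== null;
  } catch {
    return false;
  }
}
```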
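
## Loading the model in the browser (sketch)

The Space is tagged with Transformers.js, which can run ONNX models on WebGPU. The snippet below shows the general loading pattern only: it is an assumption that LFM2.5-Audio works with the generic `automatic-speech-recognition` pipeline (the actual demo may use model-specific classes), and the `dtype` value and sample URL are placeholders.

```ts
import { pipeline } from '@huggingface/transformers';

// Assumption: the model is usable via the generic ASR pipeline;
// the demo itself may rely on model-specific preprocessing instead.
const transcriber = await pipeline(
  'automatic-speech-recognition',
  'LiquidAI/LFM2.5-Audio-1.5B-ONNX',
  { device: 'webgpu', dtype: 'q4' } // dtype is an assumed placeholder
);

// Transcribe an audio clip (URL or Float32Array); the URL is a placeholder.
const result = await transcriber('https://example.com/sample.wav');
console.log(result);
```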