---
language:
  - zh
  - en
  - de
  - es
  - fr
  - ja
  - it
  - he
  - ko
  - ru
  - fa
  - ar
  - pl
  - pt
  - cs
  - da
  - sv
  - hu
  - el
  - tr
license: apache-2.0
library_name: transformers
pipeline_tag: text-to-speech
tags:
  - text-to-speech
  - audio-tokenizer
  - moss
---

# MOSS-TTS Family



## Overview

MOSS‑TTS Family is an open‑source speech and sound generation model family from MOSI.AI and the OpenMOSS team. It is built upon the MOSS-Audio-Tokenizer, a unified discrete audio tokenizer based on the CAT (Causal Audio Tokenizer with Transformer) architecture presented in the paper MOSS-Audio-Tokenizer: Scaling Audio Tokenizers for Future Audio Foundation Models.

## Sample Usage (Audio Reconstruction)

The tokenizer can be used to compress audio into discrete tokens and reconstruct it back into waveforms.

```python
import torch
import torchaudio
from transformers import AutoModel

repo_id = "OpenMOSS-Team/MOSS-Audio-Tokenizer"
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True).eval()

# Load the audio and resample it to the tokenizer's sampling rate
wav, sr = torchaudio.load("path_to_audio.wav")
if sr != model.sampling_rate:
    wav = torchaudio.functional.resample(wav, sr, model.sampling_rate)
wav = wav.unsqueeze(0)  # add a batch dimension: (batch, channels, samples)

with torch.no_grad():
    # Encode the waveform into discrete audio tokens
    enc = model.encode(wav, return_dict=True)
    print(f"enc.audio_codes.shape: {enc.audio_codes.shape}")

    # Decode the tokens back into a waveform
    dec = model.decode(enc.audio_codes, return_dict=True)
    print(f"dec.audio.shape: {dec.audio.shape}")

wav_rec = dec.audio.squeeze(0)
torchaudio.save("reconstructed.wav", wav_rec, sample_rate=model.sampling_rate)
```
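
As a quick sanity check, you can estimate the effective token rate and temporal compression from the shapes printed above. This is a minimal sketch that assumes the last dimension of `enc.audio_codes` is the frame axis; adjust the indexing if the actual layout differs.

```python
# Minimal sketch (assumption: the last dimension of enc.audio_codes is the frame axis).
duration_s = wav.shape[-1] / model.sampling_rate  # input length in seconds
num_frames = enc.audio_codes.shape[-1]            # discrete frames produced
tokens_per_second = num_frames / duration_s       # effective frame/token rate
compression = wav.shape[-1] / num_frames          # audio samples per discrete frame
print(f"~{tokens_per_second:.1f} frames/s, ~{compression:.0f}x temporal compression")
```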

## Introduction

When a single piece of audio needs to sound like a real person, pronounce every word accurately, switch speaking styles across content, remain stable over tens of minutes, and support dialogue, role‑play, and real‑time interaction, a single TTS model is often not enough. The MOSS‑TTS Family breaks the workflow into five production‑ready models that can be used independently or composed into a complete pipeline.

- **MOSS-TTS**: the flagship production TTS foundation model, centered on high-fidelity zero-shot voice cloning with controllable long-form synthesis, pronunciation, and multilingual/code-switched speech.
- **MOSS-TTSD**: a production long-form dialogue model for expressive multi-speaker conversational audio at scale.
- **MOSS-VoiceGenerator**: an open-source voice design model that creates speaker timbres directly from free-form text.
- **MOSS-SoundEffect**: a high-fidelity text-to-sound model with broad category coverage and controllable duration.
- **MOSS-TTS-Realtime**: a context-aware, multi-turn streaming TTS model for real-time voice agents.

## Released Models

| Model | Architecture | Size | Hugging Face |
|---|---|---|---|
| MOSS-TTS | MossTTSDelay | 8B | 🤗 Huggingface |
| MOSS-TTS | MossTTSLocal | 1.7B | 🤗 Huggingface |
| MOSS-TTSD-V1.0 | MossTTSDelay | 8B | 🤗 Huggingface |
| MOSS-VoiceGenerator | MossTTSDelay | 1.7B | 🤗 Huggingface |
| MOSS-SoundEffect | MossTTSDelay | 8B | 🤗 Huggingface |
| MOSS-TTS-Realtime | MossTTSRealtime | 1.7B | 🤗 Huggingface |
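
The released checkpoints are hosted on the Hugging Face Hub. The sketch below assumes they follow the same loading pattern as the tokenizer example earlier in this card (`AutoModel` with `trust_remote_code=True`); the repository id is a placeholder, and the exact ids and generation APIs are documented on each model's own card.

```python
from transformers import AutoModel

# Placeholder repository id, shown for illustration only; substitute the actual
# id for the model you want from the table above (see that model's card).
repo_id = "<org>/<model-name>"  # hypothetical placeholder, not a real id

# Assumption: the TTS checkpoints ship custom modeling code, so they are loaded
# with trust_remote_code=True, mirroring the tokenizer example in this card.
tts_model = AutoModel.from_pretrained(repo_id, trust_remote_code=True).eval()
```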

## Supported Languages

MOSS-TTS, MOSS-TTSD, and MOSS-TTS-Realtime currently support 20 languages: Chinese, English, German, Spanish, French, Japanese, Italian, Hebrew, Korean, Russian, Persian (Farsi), Arabic, Polish, Portuguese, Czech, Danish, Swedish, Hungarian, Greek, and Turkish.

## Evaluation

MOSS-TTS achieved state-of-the-art results on the zero-shot TTS benchmark Seed-TTS-eval, rivaling the most powerful closed-source systems.

| Model | EN WER (%) ↓ | EN SIM (%) ↑ | ZH CER (%) ↓ | ZH SIM (%) ↑ |
|---|---|---|---|---|
| MossTTSDelay (8B) | 1.79 | 71.46 | 1.32 | 77.05 |
| MossTTSLocal (1.7B) | 1.85 | 73.42 | 1.2 | 78.82 |
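
For context, speaker similarity (SIM) scores of this kind are typically the cosine similarity between speaker embeddings of the reference prompt and the synthesized audio, reported as a percentage. The sketch below illustrates that computation only; the embedding model that produces the vectors is a stand-in assumption, since Seed-TTS-eval defines its own reference setup.

```python
import torch
import torch.nn.functional as F

def speaker_similarity(emb_prompt: torch.Tensor, emb_synth: torch.Tensor) -> float:
    """Cosine similarity between two speaker embeddings, as a percentage.

    The embeddings are assumed to come from a speaker-verification model;
    which model is used is an assumption, not specified by this card.
    """
    sim = F.cosine_similarity(emb_prompt.unsqueeze(0), emb_synth.unsqueeze(0)).item()
    return 100.0 * sim

# Toy usage with random vectors, just to show the shape contract.
e1, e2 = torch.randn(256), torch.randn(256)
print(f"SIM: {speaker_similarity(e1, e2):.2f}%")
```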

## Citation

If you use this code or these results in your research, please cite:

```bibtex
@misc{gong2026mossaudiotokenizerscalingaudiotokenizers,
      title={MOSS-Audio-Tokenizer: Scaling Audio Tokenizers for Future Audio Foundation Models},
      author={Yitian Gong and Kuangwei Chen and Zhaoye Fei and Xiaogui Yang and Ke Chen and Yang Wang and Kexin Huang and Mingshu Chen and Ruixiao Li and Qingyuan Cheng and Shimin Li and Xipeng Qiu},
      year={2026},
      eprint={2602.10934},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2602.10934},
}
```