Commit 81ba1f8 by nielsr (HF Staff) · verified · 1 Parent(s): 1094a46

Improve model card metadata and add paper link


This pull request improves the model card for the MOSS-TTS Family by:
- Adding `library_name: transformers` metadata, as the model is compatible with the Transformers library via `AutoModel`.
- Adding `pipeline_tag: text-to-speech` to improve model discoverability.
- Linking the repository to the official research paper: [MOSS-Audio-Tokenizer: Scaling Audio Tokenizers for Future Audio Foundation Models](https://huggingface.co/papers/2602.10934).
- Including a sample usage code snippet for audio reconstruction directly from the official GitHub documentation.

Files changed (1)
1. README.md +69 -599
README.md CHANGED
@@ -1,7 +1,4 @@
 ---
- license: apache-2.0
- tags:
- - text-to-speech
 language:
 - zh
 - en
@@ -23,7 +20,15 @@ language:
 - hu
 - el
 - tr
 ---
 # MOSS-TTS Family
 
 <br>
@@ -36,10 +41,10 @@ language:
 
 
 <div align="center">
- <a href="https://github.com/OpenMOSS/MOSS-TTS/tree/main"><img src="https://img.shields.io/badge/Project%20Page-GitHub-blue"></a>
 <a href="https://modelscope.cn/collections/OpenMOSS-Team/MOSS-TTS"><img src="https://img.shields.io/badge/ModelScope-Models-lightgrey?logo=modelscope&amp"></a>
 <a href="https://mosi.cn/#models"><img src="https://img.shields.io/badge/Blog-View-blue?logo=internet-explorer&amp"></a>
- <a href="https://github.com/OpenMOSS/MOSS-TTS"><img src="https://img.shields.io/badge/Arxiv-Coming%20soon-red?logo=arxiv&amp"></a>
 
 <a href="https://studio.mosi.cn"><img src="https://img.shields.io/badge/AIStudio-Try-green?logo=internet-explorer&amp"></a>
 <a href="https://studio.mosi.cn/docs/moss-tts"><img src="https://img.shields.io/badge/API-Docs-00A3FF?logo=fastapi&amp"></a>
@@ -48,622 +53,87 @@ language:
 </div>
 
 ## Overview
- MOSS‑TTS Family is an open‑source **speech and sound generation model family** from [MOSI.AI](https://mosi.cn/#hero) and the [OpenMOSS team](https://www.open-moss.com/). It is designed for **high‑fidelity**, **high‑expressiveness**, and **complex real‑world scenarios**, covering stable long‑form speech, multi‑speaker dialogue, voice/character design, environmental sound effects, and real‑time streaming TTS.
-
- ## Introduction
-
- <p align="center">
-   <img src="https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_imgaes_demo/moss_tts_family_arch.jpeg" width="85%" />
- </p>
-
- When a single piece of audio needs to **sound like a real person**, **pronounce every word accurately**, **switch speaking styles across content**, **remain stable over tens of minutes**, and **support dialogue, role‑play, and real‑time interaction**, a single TTS model is often not enough. The **MOSS‑TTS Family** breaks the workflow into five production‑ready models that can be used independently or composed into a complete pipeline.
-
- - **MOSS‑TTS**: the flagship production TTS foundation model, centered on high-fidelity zero-shot voice cloning with controllable long-form synthesis, pronunciation, and multilingual/code-switched speech. It serves as the core engine for scalable narration, dubbing, and voice-driven products.
- - **MOSS‑TTSD**: a production long-form dialogue model for expressive multi-speaker conversational audio at scale. It supports long-duration continuity, turn-taking control, and zero-shot voice cloning from short references for podcasts, audiobooks, commentary, dubbing, and entertainment dialogue.
- - **MOSS‑VoiceGenerator**: an open-source voice design model that creates speaker timbres directly from free-form text, without reference audio. It unifies timbre design, style control, and content synthesis, and can be used standalone or as a voice-design layer for downstream TTS.
- - **MOSS‑SoundEffect**: a high-fidelity text-to-sound model with broad category coverage and controllable duration for real content production. It generates stable audio from prompts across ambience, urban scenes, creatures, human actions, and music-like clips for film, games, interactive media, and data synthesis.
- - **MOSS‑TTS‑Realtime**: a context-aware, multi-turn streaming TTS model for real-time voice agents. By conditioning on dialogue history across both text and prior user acoustics, it delivers low-latency synthesis with coherent, consistent voice responses across turns.
-
- ## Released Models
-
- | Model | Architecture | Size | Model Card | Hugging Face |
- |---|---|---:|---|---|
- | **MOSS-TTS** | MossTTSDelay | 8B | [moss_tts_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_tts_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTS) |
- | | MossTTSLocal | 1.7B | [moss_tts_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_tts_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Local-Transformer) |
- | **MOSS‑TTSD‑V1.0** | MossTTSDelay | 8B | [moss_ttsd_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_ttsd_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTSD-v1.0) |
- | **MOSS‑VoiceGenerator** | MossTTSDelay | 1.7B | [moss_voice_generator_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_voice_generator_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-Voice-Generator) |
- | **MOSS‑SoundEffect** | MossTTSDelay | 8B | [moss_sound_effect_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_sound_effect_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-SoundEffect) |
- | **MOSS‑TTS‑Realtime** | MossTTSRealtime | 1.7B | [moss_tts_realtime_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_tts_realtime_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Realtime) |
-
- ## Supported Languages
-
- MOSS-TTS, MOSS-TTSD, and MOSS-TTS-Realtime currently support **20 languages**:
-
- | Language | Code | Flag | Language | Code | Flag | Language | Code | Flag |
- |---|---|---|---|---|---|---|---|---|
- | Chinese | zh | 🇨🇳 | English | en | 🇺🇸 | German | de | 🇩🇪 |
- | Spanish | es | 🇪🇸 | French | fr | 🇫🇷 | Japanese | ja | 🇯🇵 |
- | Italian | it | 🇮🇹 | Hebrew | he | 🇮🇱 | Korean | ko | 🇰🇷 |
- | Russian | ru | 🇷🇺 | Persian (Farsi) | fa | 🇮🇷 | Arabic | ar | 🇸🇦 |
- | Polish | pl | 🇵🇱 | Portuguese | pt | 🇵🇹 | Czech | cs | 🇨🇿 |
- | Danish | da | 🇩🇰 | Swedish | sv | 🇸🇪 | Hungarian | hu | 🇭🇺 |
- | Greek | el | 🇬🇷 | Turkish | tr | 🇹🇷 | | | |
-
- # MOSS-TTS
- ## 1. Overview
- ### 1.1 TTS Family Positioning
- MOSS-TTS is the **flagship base model** in our open-source **TTS Family**. It is designed as a production-ready synthesis backbone that can serve as the primary high-quality engine for scalable voice applications, and as a strong research baseline for controllable TTS and discrete audio token modeling.
-
- **Design goals**
- - **Production readiness**: robust voice cloning with stable, on-brand speaker identity at scale
- - **Controllability**: duration and pronunciation controls that integrate into real workflows
- - **Long-form stability**: consistent identity and delivery for extended narration
- - **Multilingual coverage**: multilingual and code-switched synthesis as first-class capabilities
-
- ### 1.2 Key Capabilities
-
- MOSS-TTS delivers state-of-the-art quality while providing the fine-grained controllability and long-form stability required for production-grade voice applications, from zero-shot cloning and hour-long narration to token- and phoneme-level control across multilingual and code-switched speech.
-
- * **State-of-the-art evaluation performance** — top-tier objective and subjective results across standard TTS benchmarks and in-house human preference testing, validating both fidelity and naturalness.
- * **Zero-shot Voice Cloning** — clone a target speaker's timbre (and part of the speaking style) from short reference audio, without speaker-specific fine-tuning.
- * **Ultra-long Speech Generation (up to 1 hour)** — continuous long-form speech generation for up to one hour in a single run, designed for extended narration and long-session content creation.
- * **Token-level Duration Control** — control pacing, rhythm, pauses, and speaking rate at token resolution for precise alignment and expressive delivery.
- * **Phoneme-level Pronunciation Control** — supports:
-   * pure **Pinyin** input
-   * pure **IPA** phoneme input
-   * mixed **Chinese / English / Pinyin / IPA** input in any combination
- * **Multilingual support** — high-quality multilingual synthesis with robust generalization across languages and accents.
- * **Code-switching** — natural mixed-language generation within a single utterance (e.g., Chinese–English), with smooth transitions, consistent speaker identity, and pronunciation-aware rendering on both sides of the switch.
-
- ### 1.3 Model Architecture
-
- MOSS-TTS includes **two complementary architectures**, both trained and released to explore different performance/latency tradeoffs and to support downstream research.
-
- **Architecture A: Delay Pattern (MossTTSDelay)**
- - Single Transformer backbone with **(n_vq + 1) heads**.
- - Uses **delay scheduling** for multi-codebook audio tokens (see the sketch after this list).
- - Strong long-context stability, efficient inference, and production-friendly behavior.
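
To make the delay pattern concrete, here is a minimal, illustrative sketch of how a multi-codebook token grid can be delay-shifted before autoregressive prediction. It is a toy example written for this card (the function name and padding scheme are ours), not the released implementation:

```python
import torch

def apply_delay_pattern(codes: torch.Tensor, pad_id: int) -> torch.Tensor:
    """Shift codebook k right by k steps so all codebooks can be predicted per step.

    codes: (n_vq, T) integer token grid; returns (n_vq, T + n_vq - 1).
    """
    n_vq, T = codes.shape
    out = torch.full((n_vq, T + n_vq - 1), pad_id, dtype=codes.dtype)
    for k in range(n_vq):
        out[k, k:k + T] = codes[k]  # codebook k starts k steps later
    return out

# Example: 4 codebooks, 6 frames.
grid = torch.arange(24).reshape(4, 6)
print(apply_delay_pattern(grid, pad_id=-1))
```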
-
- **Architecture B: Global Latent + Local Transformer (MossTTSLocal)**
- - Backbone produces a **global latent** per time step.
- - A lightweight **Local Transformer** emits a token block per step.
- - **Streaming-friendly** with simpler alignment (no delay scheduling).
-
- **Why train both?**
- - **Exploration of architectural potential** and validation across multiple generation paradigms.
- - **Different tradeoffs**: the delay pattern tends to be faster and more stable for long-form synthesis; Local is smaller and excels on objective benchmarks.
- - **Open-source value**: two strong baselines for research, ablation, and downstream innovation.
-
- For full details, see:
- - **[moss_tts_delay/README.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_tts_delay/README.md)**
- - **[moss_tts_local/README.md](https://github.com/OpenMOSS/MOSS-TTS/tree/main/moss_tts_local)**
-
- ### 1.4 Released Models
-
- | Model | Description |
- |---|---|
- | **MossTTSDelay-8B** | **Recommended for production**. Faster inference, stronger long-context stability, and robust voice cloning quality. Best for large-scale deployment and long-form narration. |
- | **MossTTSLocal-1.7B** | **Recommended for evaluation and research**. Smaller model size with SOTA objective metrics. Great for quick experiments, ablations, and academic studies. |
-
- **Recommended decoding hyperparameters (per model)**
-
- | Model | audio_temperature | audio_top_p | audio_top_k | audio_repetition_penalty |
- |---|---:|---:|---:|---:|
- | **MossTTSDelay-8B** | 1.7 | 0.8 | 25 | 1.0 |
- | **MossTTSLocal-1.7B** | 1.0 | 0.95 | 50 | 1.1 |
-
- > Note: `max_new_tokens` controls duration. At 12.5 tokens per second, **1s ≈ 12.5 tokens**.
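
Since duration is set in tokens, converting a target length is simple arithmetic; the helper below is a small illustration we add here, not part of the official API:

```python
# Convert a target duration into max_new_tokens at the documented 12.5 tokens/s rate.
TOKENS_PER_SECOND = 12.5

def duration_to_max_new_tokens(seconds: float) -> int:
    return int(seconds * TOKENS_PER_SECOND)

print(duration_to_max_new_tokens(60))    # 750   -> roughly 1 minute of audio
print(duration_to_max_new_tokens(3600))  # 45000 -> roughly 1 hour of audio
```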
-
- ## 2. Quick Start
-
- ### Environment Setup
-
- We recommend a clean, isolated Python environment with **Transformers 5.0.0** to avoid dependency conflicts.
-
- ```bash
- conda create -n moss-tts python=3.12 -y
- conda activate moss-tts
- ```
-
- Install all required dependencies:
-
- ```bash
- git clone https://github.com/OpenMOSS/MOSS-TTS.git
- cd MOSS-TTS
- pip install --extra-index-url https://download.pytorch.org/whl/cu128 -e .
- ```
-
- #### (Optional) Install FlashAttention 2
-
- For better speed and lower GPU memory usage, you can install FlashAttention 2 if your hardware supports it.
-
- ```bash
- pip install --extra-index-url https://download.pytorch.org/whl/cu128 -e ".[flash-attn]"
- ```
-
- If your machine has limited RAM and many CPU cores, you can cap build parallelism:
-
- ```bash
- MAX_JOBS=4 pip install --extra-index-url https://download.pytorch.org/whl/cu128 -e ".[flash-attn]"
- ```
-
- Notes:
- - Dependencies are managed in `pyproject.toml`, which currently pins `torch==2.9.1+cu128` and `torchaudio==2.9.1+cu128`.
- - If FlashAttention 2 fails to build on your machine, you can skip it and use the default attention backend.
- - FlashAttention 2 is only available on supported GPUs and is typically used with `torch.float16` or `torch.bfloat16`; a quick capability check is sketched below.
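
If you are unsure whether your GPU qualifies, the snippet below (our own convenience check, mirroring the capability test used in the usage example later in this card) prints a verdict before you attempt the build:

```python
# Convenience check: FlashAttention 2 needs compute capability >= 8.0 (Ampere or newer).
import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability()
    verdict = "supported" if major >= 8 else "not supported"
    print(f"Compute capability {major}.{minor}: FlashAttention 2 {verdict}")
else:
    print("No CUDA device detected; use the default attention backend.")
```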
-
- ### Basic Usage
-
- > Tip: For evaluation and research purposes, we recommend using **MossTTSLocal-1.7B**.
-
- MOSS-TTS provides a convenient `generate` interface for rapid usage. The examples below cover:
- 1. Direct generation (Chinese / English / Pinyin / IPA)
- 2. Voice cloning
- 3. Duration control

 ```python
- import importlib.util
- from pathlib import Path
 import torch
 import torchaudio
- from transformers import AutoModel, AutoProcessor, GenerationConfig
- # Disable the broken cuDNN SDPA backend
- torch.backends.cuda.enable_cudnn_sdp(False)
- # Keep these enabled as fallbacks
- torch.backends.cuda.enable_flash_sdp(True)
- torch.backends.cuda.enable_mem_efficient_sdp(True)
- torch.backends.cuda.enable_math_sdp(True)
-
- class DelayGenerationConfig(GenerationConfig):
-     def __init__(self, **kwargs):
-         super().__init__(**kwargs)
-         self.layers = kwargs.get("layers", [{} for _ in range(32)])
-         self.do_samples = kwargs.get("do_samples", None)
-         self.n_vq_for_inference = 32
-
- def initial_config(tokenizer, model_name_or_path):
-     generation_config = DelayGenerationConfig.from_pretrained(model_name_or_path)
-     generation_config.pad_token_id = tokenizer.pad_token_id
-     generation_config.eos_token_id = 151653
-     generation_config.max_new_tokens = 1000000
-     generation_config.temperature = 1.0
-     generation_config.top_p = 0.95
-     generation_config.top_k = 100
-     generation_config.repetition_penalty = 1.1
-     generation_config.use_cache = True
-     generation_config.do_sample = False
-     return generation_config
-
- pretrained_model_name_or_path = "OpenMOSS-Team/MOSS-TTS-Local-Transformer"
- device = "cuda" if torch.cuda.is_available() else "cpu"
- dtype = torch.bfloat16 if device == "cuda" else torch.float32
-
- def resolve_attn_implementation() -> str:
-     # Prefer FlashAttention 2 when package + device conditions are met.
-     if (
-         device == "cuda"
-         and importlib.util.find_spec("flash_attn") is not None
-         and dtype in {torch.float16, torch.bfloat16}
-     ):
-         major, _ = torch.cuda.get_device_capability()
-         if major >= 8:
-             return "flash_attention_2"
-
-     # CUDA fallback: use PyTorch SDPA kernels.
-     if device == "cuda":
-         return "sdpa"
-
-     # CPU fallback.
-     return "eager"
-
- attn_implementation = resolve_attn_implementation()
- print(f"[INFO] Using attn_implementation={attn_implementation}")
-
- processor = AutoProcessor.from_pretrained(
-     pretrained_model_name_or_path,
-     trust_remote_code=True,
- )
- processor.audio_tokenizer = processor.audio_tokenizer.to(device)
-
- text_1 = """亲爱的你,
- 你好呀。
-
- 今天,我想用最认真、最温柔的声音,对你说一些重要的话。
- 这些话,像一颗小小的星星,希望能在你的心里慢慢发光。
-
- 首先,我想祝你——
- 每天都能平平安安、快快乐乐。
-
- 希望你早上醒来的时候,
- 窗外有光,屋子里很安静,
- 你的心是轻轻的,没有着急,也没有害怕。
- """
- text_2 = """We stand on the threshold of the AI era.
- Artificial intelligence is no longer just a concept in laboratories, but is entering every industry, every creative endeavor, and every decision. It has learned to see, hear, speak, and think, and is beginning to become an extension of human capabilities. AI is not about replacing humans, but about amplifying human creativity, making knowledge more equitable, more efficient, and allowing imagination to reach further. A new era, jointly shaped by humans and intelligent systems, has arrived."""
- text_3 = "nin2 hao3,qing3 wen4 nin2 lai2 zi4 na3 zuo4 cheng2 shi4?"
- text_4 = "nin2 hao3,qing4 wen3 nin2 lai2 zi4 na4 zuo3 cheng4 shi3?"
- text_5 = "您好,请问您来自哪 zuo4 cheng2 shi4?"
- text_6 = "/həloʊ, meɪ aɪ æsk wɪtʃ sɪti juː ɑːr frʌm?/"
-
- # Use audio from ./assets/audio to avoid downloading from the cloud.
- ref_audio_1 = "https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_demo/reference_zh.wav"
- ref_audio_2 = "https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_demo/reference_en.m4a"
-
- conversations = [
-     # Direct TTS (no reference)
-     [
-         processor.build_user_message(text=text_1)
-     ],
-     [
-         processor.build_user_message(text=text_2)
-     ],
-     # Pinyin or IPA input
-     [
-         processor.build_user_message(text=text_3)
-     ],
-     [
-         processor.build_user_message(text=text_4)
-     ],
-     [
-         processor.build_user_message(text=text_5)
-     ],
-     [
-         processor.build_user_message(text=text_6)
-     ],
-     # Voice cloning (with reference)
-     [
-         processor.build_user_message(text=text_1, reference=[ref_audio_1])
-     ],
-     [
-         processor.build_user_message(text=text_2, reference=[ref_audio_2])
-     ],
- ]
-
- model = AutoModel.from_pretrained(
-     pretrained_model_name_or_path,
-     trust_remote_code=True,
-     attn_implementation=attn_implementation,
-     torch_dtype=dtype,
- ).to(device)
- model.eval()
-
- generation_config = initial_config(processor.tokenizer, pretrained_model_name_or_path)
- generation_config.n_vq_for_inference = model.channels - 1
- generation_config.do_samples = [True] * model.channels
- generation_config.layers = [
-     {
-         "repetition_penalty": 1.0,
-         "temperature": 1.5,
-         "top_p": 1.0,
-         "top_k": 50
-     }
- ] + [
-     {
-         "repetition_penalty": 1.1,
-         "temperature": 1.0,
-         "top_p": 0.95,
-         "top_k": 50
-     }
- ] * (model.channels - 1)
-
- batch_size = 1
-
- save_dir = Path("inference_root_moss_tts_local_transformer_generation")
- save_dir.mkdir(exist_ok=True, parents=True)
- sample_idx = 0
- with torch.no_grad():
-     for start in range(0, len(conversations), batch_size):
-         batch_conversations = conversations[start : start + batch_size]
-         batch = processor(batch_conversations, mode="generation")
-         input_ids = batch["input_ids"].to(device)
-         attention_mask = batch["attention_mask"].to(device)
-
-         outputs = model.generate(
-             input_ids=input_ids,
-             attention_mask=attention_mask,
-             generation_config=generation_config
-         )
-
-         for message in processor.decode(outputs):
-             audio = message.audio_codes_list[0]
-             out_path = save_dir / f"sample{sample_idx}.wav"
-             sample_idx += 1
-             torchaudio.save(out_path, audio.unsqueeze(0), processor.model_config.sampling_rate)
 ```

- ### Continuation + Voice Cloning (Prefix Audio + Text)
-
- MOSS-TTS supports continuation-based cloning: provide a prefix audio clip in the assistant message, and make sure the **prefix transcript** is included in the text. The model continues in the same speaker identity and style.
-
- ```python
- import importlib.util
- from pathlib import Path
- import torch
- import torchaudio
- from transformers import AutoModel, AutoProcessor, GenerationConfig
- # Disable the broken cuDNN SDPA backend
- torch.backends.cuda.enable_cudnn_sdp(False)
- # Keep these enabled as fallbacks
- torch.backends.cuda.enable_flash_sdp(True)
- torch.backends.cuda.enable_mem_efficient_sdp(True)
- torch.backends.cuda.enable_math_sdp(True)
-
- class DelayGenerationConfig(GenerationConfig):
-     def __init__(self, **kwargs):
-         super().__init__(**kwargs)
-         self.layers = kwargs.get("layers", [{} for _ in range(32)])
-         self.do_samples = kwargs.get("do_samples", None)
-         self.n_vq_for_inference = 32
-
- def initial_config(tokenizer, model_name_or_path):
-     generation_config = DelayGenerationConfig.from_pretrained(model_name_or_path)
-     generation_config.pad_token_id = tokenizer.pad_token_id
-     generation_config.eos_token_id = 151653
-     generation_config.max_new_tokens = 1000000
-     generation_config.temperature = 1.0
-     generation_config.top_p = 0.95
-     generation_config.top_k = 100
-     generation_config.repetition_penalty = 1.1
-     generation_config.use_cache = True
-     generation_config.do_sample = False
-     return generation_config
-
- pretrained_model_name_or_path = "OpenMOSS-Team/MOSS-TTS-Local-Transformer"
- device = "cuda" if torch.cuda.is_available() else "cpu"
- dtype = torch.bfloat16 if device == "cuda" else torch.float32
-
- def resolve_attn_implementation() -> str:
-     # Prefer FlashAttention 2 when package + device conditions are met.
-     if (
-         device == "cuda"
-         and importlib.util.find_spec("flash_attn") is not None
-         and dtype in {torch.float16, torch.bfloat16}
-     ):
-         major, _ = torch.cuda.get_device_capability()
-         if major >= 8:
-             return "flash_attention_2"
-
-     # CUDA fallback: use PyTorch SDPA kernels.
-     if device == "cuda":
-         return "sdpa"
-
-     # CPU fallback.
-     return "eager"
-
- attn_implementation = resolve_attn_implementation()
- print(f"[INFO] Using attn_implementation={attn_implementation}")
-
- processor = AutoProcessor.from_pretrained(
-     pretrained_model_name_or_path,
-     trust_remote_code=True,
- )
- processor.audio_tokenizer = processor.audio_tokenizer.to(device)
-
- text_1 = """亲爱的你,
- 你好呀。
-
- 今天,我想用最认真、最温柔的声音,对你说一些重要的话。
- 这些话,像一颗小小的星星,希望能在你的心里慢慢发光。
-
- 首先,我想祝你——
- 每天都能平平安安、快快乐乐。
-
- 希望你早上醒来的时候,
- 窗外有光,屋子里很安静,
- 你的心是轻轻的,没有着急,也没有害怕。
- """
-
- ref_text_1 = "太阳系八大行星之一。"
- # Use audio from ./assets/audio to avoid downloading from the cloud.
- ref_audio_1 = "https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_demo/reference_zh.wav"
-
- conversations = [
-     # Continuation only
-     [
-         processor.build_user_message(text=ref_text_1 + text_1),
-         processor.build_assistant_message(audio_codes_list=[ref_audio_1])
-     ],
- ]
-
- model = AutoModel.from_pretrained(
-     pretrained_model_name_or_path,
-     trust_remote_code=True,
-     attn_implementation=attn_implementation,
-     torch_dtype=dtype,
- ).to(device)
- model.eval()
-
- generation_config = initial_config(processor.tokenizer, pretrained_model_name_or_path)
- generation_config.n_vq_for_inference = model.channels - 1
- generation_config.do_samples = [True] * model.channels
- generation_config.layers = [
-     {
-         "repetition_penalty": 1.0,
-         "temperature": 1.5,
-         "top_p": 1.0,
-         "top_k": 50
-     }
- ] + [
-     {
-         "repetition_penalty": 1.1,
-         "temperature": 1.0,
-         "top_p": 0.95,
-         "top_k": 50
-     }
- ] * (model.channels - 1)
-
- batch_size = 1
-
- save_dir = Path("inference_root_moss_tts_local_transformer_continuation")
- save_dir.mkdir(exist_ok=True, parents=True)
- sample_idx = 0
- with torch.no_grad():
-     for start in range(0, len(conversations), batch_size):
-         batch_conversations = conversations[start : start + batch_size]
-         batch = processor(batch_conversations, mode="continuation")
-         input_ids = batch["input_ids"].to(device)
-         attention_mask = batch["attention_mask"].to(device)
-
-         outputs = model.generate(
-             input_ids=input_ids,
-             attention_mask=attention_mask,
-             generation_config=generation_config
-         )
-
-         for message in processor.decode(outputs):
-             audio = message.audio_codes_list[0]
-             out_path = save_dir / f"sample{sample_idx}.wav"
-             sample_idx += 1
-             torchaudio.save(out_path, audio.unsqueeze(0), processor.model_config.sampling_rate)
- ```

- ### Input Types
-
- **UserMessage**
-
- | Field | Type | Required | Description |
- |---|---|---:|---|
- | `text` | `str` | Yes | Text to synthesize. Supports Chinese, English, German, French, Spanish, Japanese, Korean, etc. Can mix raw text with Pinyin or IPA for pronunciation control. |
- | `reference` | `List[str]` | No | Reference audio for voice cloning. For the current MOSS-TTS, **one audio** is expected in the list. |
- | `tokens` | `int` | No | Expected number of audio tokens. **1s ≈ 12.5 tokens**. |
-
- **AssistantMessage**
-
- | Field | Type | Required | Description |
 |---|---|---:|---|
- | `audio_codes_list` | `List[str]` | Only for continuation | Prefix audio for continuation-based cloning. Use audio file paths or URLs. |
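
To illustrate how these fields combine, here is a hypothetical snippet reusing the `processor` from the Quick Start example; the file names are placeholders, not shipped assets:

```python
# Hypothetical field usage; "my_reference.wav" / "my_prefix.wav" are placeholder paths.
user_msg = processor.build_user_message(
    text="你好,世界!",               # required: text to synthesize
    reference=["my_reference.wav"],   # optional: exactly one reference clip for cloning
    tokens=int(12.5 * 5),             # optional: target about 5 seconds of audio
)
assistant_msg = processor.build_assistant_message(
    audio_codes_list=["my_prefix.wav"]  # only for continuation: prefix audio path or URL
)
```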

- ### Generation Hyperparameters (MOSS-TTS-Local)
-
- MOSS-TTSLocal utilizes `DelayGenerationConfig` to manage hierarchical sampling. Due to the **Progressive Sequence Dropout** training mechanism, the model supports variable-bitrate inference by adjusting the RVQ depth, as sketched after the table below.
-
- | Parameter | Type | Recommended (Audio Layers) | Description |
- | :--- | :--- | :---: | :--- |
- | `max_new_tokens` | `int` | — | Controls total generated audio tokens. **1s ≈ 12.5 tokens**. |
- | `n_vq_for_inference` | `int` | 32 | **RVQ inference depth**: controls the number of codebook layers generated. Higher values (max 32) improve audio fidelity but slow down inference; lower values speed up inference but reduce audio quality. |
- | `audio_temperature` | `float` | 1.0 | Temperature for audio token layers (Layer 1+). Lower values ensure more stable and consistent acoustic reconstruction. |
- | `audio_top_p` | `float` | 0.95 | Nucleus sampling cutoff for audio layers. |
- | `audio_top_k` | `int` | 50 | Top-K sampling filter for audio layers. |
- | `audio_repetition_penalty` | `float` | 1.1 | Discourages repeating acoustic patterns. Values > 1.0 help prevent artifacts in long-form synthesis. |
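
As a sketch of that variable-bitrate knob, applied to the `generation_config` object built in the Quick Start example (the values are illustrative, not official recommendations):

```python
# Sketch: trade audio fidelity for speed by generating fewer RVQ layers.
# 16 is an illustrative depth; full fidelity is 32, lower is faster but coarser.
generation_config.n_vq_for_inference = 16
generation_config.max_new_tokens = int(12.5 * 30)  # cap output at roughly 30 seconds
```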

- ### Pinyin Input
-
- Use tone-numbered Pinyin such as `ni3 hao3 wo3 men1`. You can convert Chinese text with [pypinyin](https://github.com/mozillazg/python-pinyin), then adjust tones for pronunciation control.
-
- ```python
- import re
- from pypinyin import pinyin, Style
-
- CN_PUNCT = r",。!?;:、()“”‘’"
-
- def fix_punctuation_spacing(s: str) -> str:
-     s = re.sub(rf"\s+([{CN_PUNCT}])", r"\1", s)
-     s = re.sub(rf"([{CN_PUNCT}])\s+", r"\1", s)
-     return s
-
- def zh_to_pinyin_tone3(text: str, strict: bool = True) -> str:
-     result = pinyin(
-         text,
-         style=Style.TONE3,
-         heteronym=False,
-         strict=strict,
-         errors="default",
-     )
-     s = " ".join(item[0] for item in result)
-     return fix_punctuation_spacing(s)
-
- text = zh_to_pinyin_tone3("您好,请问您来自哪座城市?")
- print(text)
-
- # Expected: nin2 hao3,qing3 wen4 nin2 lai2 zi4 na3 zuo4 cheng2 shi4?
- # Try: nin2 hao3,qing4 wen3 nin2 lai2 zi4 na4 zuo3 cheng4 shi3?
- ```

- ### IPA Input
-
- Use `/.../` to wrap IPA sequences so they are distinct from normal text. You can use [DeepPhonemizer](https://github.com/spring-media/DeepPhonemizer) to convert English paragraphs or words into IPA sequences.
-
- ```python
- from dp.phonemizer import Phonemizer
-
- # Download a phonemizer checkpoint from https://public-asai-dl-models.s3.eu-central-1.amazonaws.com/DeepPhonemizer/en_us_cmudict_ipa_forward.pt
- model_path = "<path-to-phonemizer-checkpoint>"
- phonemizer = Phonemizer.from_checkpoint(model_path)
-
- english_texts = "Hello, may I ask which city you are from?"
- phoneme_outputs = phonemizer(
-     english_texts,
-     lang="en_us",
-     batch_size=8
- )
- model_input_text = f"/{phoneme_outputs}/"
- print(model_input_text)
-
- # Expected: /həloʊ, meɪ aɪ æsk wɪtʃ sɪti juː ɑːr frʌm?/
- ```

- ## 3. Evaluation
- MOSS-TTS achieved state-of-the-art results on the open-source zero-shot TTS benchmark Seed-TTS-eval, leading open-source models on speaker similarity and rivaling the most powerful closed-source models.
-
- | Model | Params | Open-source | EN WER (%) ↓ | EN SIM (%) ↑ | ZH CER (%) ↓ | ZH SIM (%) ↑ |
- |---|---:|:---:|---:|---:|---:|---:|
- | DiTAR | 0.6B | ❌ | 1.69 | 73.5 | 1.02 | 75.3 |
- | FishAudio-S1 | 4B | ❌ | 1.72 | 62.57 | 1.22 | 72.1 |
- | Seed-TTS | | ❌ | 2.25 | 76.2 | 1.12 | 79.6 |
- | MiniMax-Speech | | ❌ | 1.65 | 69.2 | 0.83 | 78.3 |
- | | | | | | | |
- | CosyVoice | 0.3B | ✅ | 4.29 | 60.9 | 3.63 | 72.3 |
- | CosyVoice2 | 0.5B | ✅ | 3.09 | 65.9 | 1.38 | 75.7 |
- | CosyVoice3 | 0.5B | | 2.02 | 71.8 | 1.16 | 78 |
- | CosyVoice3 | 1.5B | | 2.22 | 72 | 1.12 | 78.1 |
- | F5-TTS | 0.3B | ✅ | 2 | 67 | 1.53 | 76 |
- | SparkTTS | 0.5B | ✅ | 3.14 | 57.3 | 1.54 | 66 |
- | FireRedTTS | 0.5B | ✅ | 3.82 | 46 | 1.51 | 63.5 |
- | FireRedTTS-2 | 1.5B | ✅ | 1.95 | 66.5 | 1.14 | 73.6 |
- | Qwen2.5-Omni | 7B | ✅ | 2.72 | 63.2 | 1.7 | 75.2 |
- | FishAudio-S1-mini | 0.5B | ✅ | 1.94 | 55 | 1.18 | 68.5 |
- | IndexTTS2 | 1.5B | ✅ | 2.23 | 70.6 | 1.03 | 76.5 |
- | VibeVoice | 1.5B | ✅ | 3.04 | 68.9 | 1.16 | 74.4 |
- | HiggsAudio-v2 | 3B | ✅ | 2.44 | 67.7 | 1.5 | 74 |
- | VoxCPM | 0.5B | ✅ | 1.85 | 72.9 | **0.93** | 77.2 |
- | Qwen3-TTS | 0.6B | ✅ | 1.68 | 70.39 | 1.23 | 76.4 |
- | Qwen3-TTS | 1.7B | ✅ | **1.5** | 71.45 | 1.33 | 76.72 |
- | | | | | | | |
- | MossTTSDelay | 8B | ✅ | 1.79 | 71.46 | 1.32 | 77.05 |
- | MossTTSLocal | 1.7B | ✅ | 1.85 | **73.42** | 1.2 | **78.82** |

 ---
 language:
 - zh
 - en
 - hu
 - el
 - tr
+ license: apache-2.0
+ library_name: transformers
+ pipeline_tag: text-to-speech
+ tags:
+ - text-to-speech
+ - audio-tokenizer
+ - moss
 ---
+
 # MOSS-TTS Family
 
 <br>
 
 <div align="center">
+ <a href="https://github.com/OpenMOSS/MOSS-Audio-Tokenizer"><img src="https://img.shields.io/badge/Project%20Page-GitHub-blue"></a>
 <a href="https://modelscope.cn/collections/OpenMOSS-Team/MOSS-TTS"><img src="https://img.shields.io/badge/ModelScope-Models-lightgrey?logo=modelscope&amp"></a>
 <a href="https://mosi.cn/#models"><img src="https://img.shields.io/badge/Blog-View-blue?logo=internet-explorer&amp"></a>
+ <a href="https://huggingface.co/papers/2602.10934"><img src="https://img.shields.io/badge/Arxiv-2602.10934-red?logo=arxiv&amp"></a>
 
 <a href="https://studio.mosi.cn"><img src="https://img.shields.io/badge/AIStudio-Try-green?logo=internet-explorer&amp"></a>
 <a href="https://studio.mosi.cn/docs/moss-tts"><img src="https://img.shields.io/badge/API-Docs-00A3FF?logo=fastapi&amp"></a>
 
 </div>
 
 ## Overview
+ MOSS‑TTS Family is an open‑source **speech and sound generation model family** from [MOSI.AI](https://mosi.cn/#hero) and the [OpenMOSS team](https://www.open-moss.com/). It is built upon the **MOSS-Audio-Tokenizer**, a unified discrete audio tokenizer based on the **CAT** (Causal Audio Tokenizer with Transformer) architecture presented in the paper [MOSS-Audio-Tokenizer: Scaling Audio Tokenizers for Future Audio Foundation Models](https://huggingface.co/papers/2602.10934).
 
+ ## Sample Usage (Audio Reconstruction)
+
+ The tokenizer can be used to compress audio into discrete tokens and reconstruct it back into waveforms.
+
 ```python
 import torch
+ from transformers import AutoModel
 import torchaudio
 
+ repo_id = "OpenMOSS-Team/MOSS-Audio-Tokenizer"
+ model = AutoModel.from_pretrained(repo_id, trust_remote_code=True).eval()
+
+ # Load and resample audio
+ wav, sr = torchaudio.load("path_to_audio.wav")
+ if sr != model.sampling_rate:
+     wav = torchaudio.functional.resample(wav, sr, model.sampling_rate)
+ wav = wav.unsqueeze(0)
+
+ # Encode audio to tokens
+ enc = model.encode(wav, return_dict=True)
+ print(f"enc.audio_codes.shape: {enc.audio_codes.shape}")
+
+ # Decode tokens back to audio
+ dec = model.decode(enc.audio_codes, return_dict=True)
+ print(f"dec.audio.shape: {dec.audio.shape}")
+
+ wav_rec = dec.audio.squeeze(0)
+ torchaudio.save("reconstructed.wav", wav_rec, sample_rate=model.sampling_rate)
 ```
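
As a quick follow-up sanity check, you can relate token count to clip duration with the tensors from the snippet above; this assumes the last dimension of `enc.audio_codes` is the time axis:

```python
# Rough token-rate estimate from the tensors produced above (assumption:
# the last dimension of enc.audio_codes is the time axis).
num_frames = enc.audio_codes.shape[-1]
duration_s = wav.shape[-1] / model.sampling_rate
print(f"~{num_frames / duration_s:.1f} tokens per second per codebook over {duration_s:.2f}s")
```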
 
+ ## Introduction
+
+ <p align="center">
+   <img src="https://speech-demo.oss-cn-shanghai.aliyuncs.com/moss_tts_demo/tts_readme_imgaes_demo/moss_tts_family_arch.jpeg" width="85%" />
+ </p>
+
+ When a single piece of audio needs to **sound like a real person**, **pronounce every word accurately**, **switch speaking styles across content**, **remain stable over tens of minutes**, and **support dialogue, role‑play, and real‑time interaction**, a single TTS model is often not enough. The **MOSS‑TTS Family** breaks the workflow into five production‑ready models that can be used independently or composed into a complete pipeline.
+
+ - **MOSS‑TTS**: the flagship production TTS foundation model, centered on high-fidelity zero-shot voice cloning with controllable long-form synthesis, pronunciation, and multilingual/code-switched speech.
+ - **MOSS‑TTSD**: a production long-form dialogue model for expressive multi-speaker conversational audio at scale.
+ - **MOSS‑VoiceGenerator**: an open-source voice design model that creates speaker timbres directly from free-form text.
+ - **MOSS‑SoundEffect**: a high-fidelity text-to-sound model with broad category coverage and controllable duration.
+ - **MOSS‑TTS‑Realtime**: a context-aware, multi-turn streaming TTS model for real-time voice agents.
+
+ ## Released Models
+
+ | Model | Architecture | Size | Hugging Face |
 |---|---|---:|---|
+ | **MOSS-TTS** | MossTTSDelay | 8B | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTS) |
+ | | MossTTSLocal | 1.7B | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Local-Transformer) |
+ | **MOSS‑TTSD‑V1.0** | MossTTSDelay | 8B | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTSD-v1.0) |
+ | **MOSS‑VoiceGenerator** | MossTTSDelay | 1.7B | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-Voice-Generator) |
+ | **MOSS‑SoundEffect** | MossTTSDelay | 8B | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-SoundEffect) |
+ | **MOSS‑TTS‑Realtime** | MossTTSRealtime | 1.7B | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Realtime) |
+
+ ## Supported Languages
+
+ MOSS-TTS, MOSS-TTSD, and MOSS-TTS-Realtime currently support **20 languages**: Chinese, English, German, Spanish, French, Japanese, Italian, Hebrew, Korean, Russian, Persian (Farsi), Arabic, Polish, Portuguese, Czech, Danish, Swedish, Hungarian, Greek, and Turkish.
+
+ ## Evaluation
+ MOSS-TTS achieved state-of-the-art results on the zero-shot TTS benchmark Seed-TTS-eval, rivaling the most powerful closed-source systems.
+
+ | Model | EN WER (%) ↓ | EN SIM (%) ↑ | ZH CER (%) ↓ | ZH SIM (%) ↑ |
+ |---|---:|---:|---:|---:|
+ | MossTTSDelay (8B) | 1.79 | 71.46 | 1.32 | 77.05 |
+ | MossTTSLocal (1.7B) | 1.85 | **73.42** | 1.2 | **78.82** |
+
+ ## Citation
+ If you use this code or these results in your research, please cite:
+ ```tex
+ @misc{gong2026mossaudiotokenizerscalingaudiotokenizers,
+       title={MOSS-Audio-Tokenizer: Scaling Audio Tokenizers for Future Audio Foundation Models},
+       author={Yitian Gong and Kuangwei Chen and Zhaoye Fei and Xiaogui Yang and Ke Chen and Yang Wang and Kexin Huang and Mingshu Chen and Ruixiao Li and Qingyuan Cheng and Shimin Li and Xipeng Qiu},
+       year={2026},
+       eprint={2602.10934},
+       archivePrefix={arXiv},
+       primaryClass={cs.SD},
+       url={https://arxiv.org/abs/2602.10934},
+ }
+ ```