Files changed (3)
  1. README.md +117 -50
  2. modeling_moss_tts.py +0 -103
  3. processing_moss_tts.py +1 -1
README.md CHANGED
@@ -39,7 +39,7 @@ language:
 <a href="https://github.com/OpenMOSS/MOSS-TTS/tree/main"><img src="https://img.shields.io/badge/Project%20Page-GitHub-blue"></a>
 <a href="https://modelscope.cn/collections/OpenMOSS-Team/MOSS-TTS"><img src="https://img.shields.io/badge/ModelScope-Models-lightgrey?logo=modelscope&amp"></a>
 <a href="https://mosi.cn/#models"><img src="https://img.shields.io/badge/Blog-View-blue?logo=internet-explorer&amp"></a>
-<a href="https://arxiv.org/abs/2603.18090"><img src="https://img.shields.io/badge/Arxiv-2603.18090-red?logo=Arxiv&amp"></a>
+<a href="https://github.com/OpenMOSS/MOSS-TTS"><img src="https://img.shields.io/badge/Arxiv-Coming%20soon-red?logo=arxiv&amp"></a>
 
 <a href="https://studio.mosi.cn"><img src="https://img.shields.io/badge/AIStudio-Try-green?logo=internet-explorer&amp"></a>
 <a href="https://studio.mosi.cn/docs/moss-tts"><img src="https://img.shields.io/badge/API-Docs-00A3FF?logo=fastapi&amp"></a>
@@ -47,7 +47,6 @@ language:
 <a href="https://discord.gg/fvm5TaWjU3"><img src="https://img.shields.io/badge/Discord-Join-5865F2?logo=discord&amp"></a>
 </div>
 
-
 ## Overview
 MOSS‑TTS Family is an open‑source **speech and sound generation model family** from [MOSI.AI](https://mosi.cn/#hero) and the [OpenMOSS team](https://www.open-moss.com/). It is designed for **high‑fidelity**, **high‑expressiveness**, and **complex real‑world scenarios**, covering stable long‑form speech, multi‑speaker dialogue, voice/character design, environmental sound effects, and real‑time streaming TTS.
 
@@ -61,37 +60,23 @@ MOSS‑TTS Family is an open‑source **speech and sound generation model family
 
 When a single piece of audio needs to **sound like a real person**, **pronounce every word accurately**, **switch speaking styles across content**, **remain stable over tens of minutes**, and **support dialogue, role‑play, and real‑time interaction**, a single TTS model is often not enough. The **MOSS‑TTS Family** breaks the workflow into five production‑ready models that can be used independently or composed into a complete pipeline.
 
-- **MOSS‑TTS**: The flagship production model featuring high fidelity and optimal zero-shot voice cloning. It supports **long-speech generation**, **fine-grained control over Pinyin, phonemes, and duration**, as well as **multilingual/code-switched synthesis**.
-- **MOSS‑TTSD**: A spoken dialogue generation model for expressive, multi-speaker, and ultra-long dialogues. The new **v1.0 version** achieves **industry-leading performance on objective metrics** and **outperformed top closed-source models like Doubao and Gemini 2.5-pro** in subjective evaluations. You can visit the [MOSS-TTSD repository](https://github.com/OpenMOSS/MOSS-TTSD) for details.
-- **MOSS‑VoiceGenerator**: An open-source voice design model capable of generating diverse voices and styles directly from text prompts, **without any reference speech**. It unifies voice design, style control, and synthesis, functioning independently or as a design layer for downstream TTS. Its performance **surpasses other top-tier voice design models in arena ratings**.
-- **MOSS‑TTS‑Realtime**: A multi-turn context-aware model for real-time voice agents. It uses incremental synthesis to ensure natural and coherent replies, making it **ideal for building low-latency voice agents when paired with text models**. The TTFB (Time To First Byte) of MOSS-TTS-Realtime reaches 180 ms, and the $T_{\text{LLM-first-sentence}} + T_{\text{MOSS-TTS-Realtime-TTFB}}$ is 377 ms.
-- **MOSS‑SoundEffect**: A content creation model specialized in **sound effect generation** with wide category coverage and controllable duration. It generates audio for natural environments, urban scenes, biological sounds, human actions, and musical fragments, suitable for film, games, and interactive experiences.
-
-
-## Model Architecture
-
-We train **MossTTSDelay** and **MossTTSLocal** as complementary baselines under one training/evaluation setup: **Delay** emphasizes long-context stability, inference speed, and production readiness, while **Local** emphasizes lightweight flexibility and strong objective performance for streaming-oriented systems. Together they provide reproducible references for deployment and research.
-
-**MossTTSRealtime** is not a third comparison baseline; it is a capability-driven design for voice agents. By modeling multi-turn context from both prior text and user acoustics, it delivers low-latency streaming speech that stays coherent and voice-consistent across turns.
+- **MOSS‑TTS**: MOSS-TTS is the flagship production TTS foundation model, centered on high-fidelity zero-shot voice cloning with controllable long-form synthesis, pronunciation, and multilingual/code-switched speech. It serves as the core engine for scalable narration, dubbing, and voice-driven products.
+- **MOSS‑TTSD**: MOSS-TTSD is a production long-form dialogue model for expressive multi-speaker conversational audio at scale. It supports long-duration continuity, turn-taking control, and zero-shot voice cloning from short references for podcasts, audiobooks, commentary, dubbing, and entertainment dialogue.
+- **MOSS‑VoiceGenerator**: MOSS-VoiceGenerator is an open-source voice design model that creates speaker timbres directly from free-form text, without reference audio. It unifies timbre design, style control, and content synthesis, and can be used standalone or as a voice-design layer for downstream TTS.
+- **MOSS‑SoundEffect**: MOSS-SoundEffect is a high-fidelity text-to-sound model with broad category coverage and controllable duration for real content production. It generates stable audio from prompts across ambience, urban scenes, creatures, human actions, and music-like clips for film, games, interactive media, and data synthesis.
+- **MOSS‑TTS‑Realtime**: MOSS-TTS-Realtime is a context-aware, multi-turn streaming TTS model for real-time voice agents. By conditioning on dialogue history across both text and prior user acoustics, it delivers low-latency synthesis with coherent, consistent voice responses across turns.
 
 
-| Architecture | Core Mechanism | Arch Details |
-|---|---|---|
-| `MossTTSDelay` | Multi‑head parallel RVQ prediction with delay‑pattern scheduling | [![Arch Details](https://img.shields.io/badge/Model%20Card-View-blue?logo=markdown)](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_tts_delay/README.md) |
-| `MossTTSLocal` | Time‑synchronous RVQ blocks with a depth transformer | [![Arch Details](https://img.shields.io/badge/Model%20Card-View-blue?logo=markdown)](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_tts_local/README.md) |
-| `MossTTSRealtime` | Hierarchical text–audio inputs for realtime synthesis | [![Arch Details](https://img.shields.io/badge/Model%20Card-View-blue?logo=markdown)](https://github.com/OpenMOSS/MOSS-TTS/blob/main/moss_tts_realtime/README.md) |
-
 ## Released Models
 
-
-| Model | Architecture | Size | Model Card | Hugging Face | ModelScope |
-|---|---|---:|---|---|---|
-| **MOSS-TTS** | `MossTTSDelay` | 8B | [![Model Card](https://img.shields.io/badge/Model%20Card-View-blue?logo=markdown)](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_tts_model_card.md) | [![Hugging Face](https://img.shields.io/badge/Huggingface-Model-orange?logo=huggingface)](https://huggingface.co/OpenMOSS-Team/MOSS-TTS) | [![ModelScope](https://img.shields.io/badge/ModelScope-Model-lightgrey?logo=modelscope)](https://modelscope.cn/models/openmoss/MOSS-TTS) |
-| | `MossTTSLocal` | 1.7B | [![Model Card](https://img.shields.io/badge/Model%20Card-View-blue?logo=markdown)](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_tts_model_card.md) | [![Hugging Face](https://img.shields.io/badge/Huggingface-Model-orange?logo=huggingface)](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Local-Transformer) | [![ModelScope](https://img.shields.io/badge/ModelScope-Model-lightgrey?logo=modelscope)](https://modelscope.cn/models/openmoss/MOSS-TTS-Local-Transformer) |
-| **MOSS‑TTSD‑V1.0** | `MossTTSDelay` | 8B | [![Model Card](https://img.shields.io/badge/Model%20Card-View-blue?logo=markdown)](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_ttsd_model_card.md) | [![Hugging Face](https://img.shields.io/badge/Huggingface-Model-orange?logo=huggingface)](https://huggingface.co/OpenMOSS-Team/MOSS-TTSD-v1.0) | [![ModelScope](https://img.shields.io/badge/ModelScope-Model-lightgrey?logo=modelscope)](https://modelscope.cn/models/openmoss/MOSS-TTSD-v1.0) |
-| **MOSS‑VoiceGenerator** | `MossTTSDelay` | 1.7B | [![Model Card](https://img.shields.io/badge/Model%20Card-View-blue?logo=markdown)](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_voice_generator_model_card.md) | [![Hugging Face](https://img.shields.io/badge/Huggingface-Model-orange?logo=huggingface)](https://huggingface.co/OpenMOSS-Team/MOSS-VoiceGenerator) | [![ModelScope](https://img.shields.io/badge/ModelScope-Model-lightgrey?logo=modelscope)](https://modelscope.cn/models/openmoss/MOSS-VoiceGenerator) |
-| **MOSS‑SoundEffect** | `MossTTSDelay` | 8B | [![Model Card](https://img.shields.io/badge/Model%20Card-View-blue?logo=markdown)](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_sound_effect_model_card.md) | [![Hugging Face](https://img.shields.io/badge/Huggingface-Model-orange?logo=huggingface)](https://huggingface.co/OpenMOSS-Team/MOSS-SoundEffect) | [![ModelScope](https://img.shields.io/badge/ModelScope-Model-lightgrey?logo=modelscope)](https://modelscope.cn/models/openmoss/MOSS-SoundEffect) |
-| **MOSS‑TTS‑Realtime** | `MossTTSRealtime` | 1.7B | [![Model Card](https://img.shields.io/badge/Model%20Card-View-blue?logo=markdown)](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_tts_realtime_model_card.md) | [![Hugging Face](https://img.shields.io/badge/Huggingface-Model-orange?logo=huggingface)](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Realtime) | [![ModelScope](https://img.shields.io/badge/ModelScope-Model-lightgrey?logo=modelscope)](https://modelscope.cn/models/openmoss/MOSS-TTS-Realtime) |
+| Model | Architecture | Size | Model Card | Hugging Face |
+|---|---|---:|---|---|
+| **MOSS-TTS** | MossTTSDelay | 8B | [moss_tts_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_tts_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTS) |
+| | MossTTSLocal | 1.7B | [moss_tts_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_tts_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Local-Transformer) |
+| **MOSS‑TTSD‑V1.0** | MossTTSDelay | 8B | [moss_ttsd_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_ttsd_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTSD-v1.0) |
+| **MOSS‑VoiceGenerator** | MossTTSDelay | 1.7B | [moss_voice_generator_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_voice_generator_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-Voice-Generator) |
+| **MOSS‑SoundEffect** | MossTTSDelay | 8B | [moss_sound_effect_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_sound_effect_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-SoundEffect) |
+| **MOSS‑TTS‑Realtime** | MossTTSRealtime | 1.7B | [moss_tts_realtime_model_card.md](https://github.com/OpenMOSS/MOSS-TTS/blob/main/docs/moss_tts_realtime_model_card.md) | 🤗 [Huggingface](https://huggingface.co/OpenMOSS-Team/MOSS-TTS-Realtime) |
 
 ## Supported Languages
 
@@ -101,12 +86,13 @@ MOSS-TTS, MOSS-TTSD and MOSS-TTS-Realtime currently supports **20 languages**:
 |---|---|---|---|---|---|---|---|---|
 | Chinese | zh | 🇨🇳 | English | en | 🇺🇸 | German | de | 🇩🇪 |
 | Spanish | es | 🇪🇸 | French | fr | 🇫🇷 | Japanese | ja | 🇯🇵 |
-| Italian | it | 🇮🇹 | Hungarian | hu | 🇭🇺 | Korean | ko | 🇰🇷 |
+| Italian | it | 🇮🇹 | Hebrew | he | 🇮🇱 | Korean | ko | 🇰🇷 |
 | Russian | ru | 🇷🇺 | Persian (Farsi) | fa | 🇮🇷 | Arabic | ar | 🇸🇦 |
 | Polish | pl | 🇵🇱 | Portuguese | pt | 🇵🇹 | Czech | cs | 🇨🇿 |
-| Danish | da | 🇩🇰 | Swedish | sv | 🇸🇪 | | | |
+| Danish | da | 🇩🇰 | Swedish | sv | 🇸🇪 | Hungarian | hu | 🇭🇺 |
 | Greek | el | 🇬🇷 | Turkish | tr | 🇹🇷 | | | |
 
+
 # MOSS-TTS
 ## 1. Overview
 ### 1.1 TTS Family Positioning
@@ -245,6 +231,27 @@ torch.backends.cuda.enable_flash_sdp(True)
 torch.backends.cuda.enable_mem_efficient_sdp(True)
 torch.backends.cuda.enable_math_sdp(True)
 
+class DelayGenerationConfig(GenerationConfig):
+    def __init__(self, **kwargs):
+        super().__init__(**kwargs)
+        self.layers = kwargs.get("layers", [{} for _ in range(32)])
+        self.do_samples = kwargs.get("do_samples", None)
+        self.n_vq_for_inference = 32
+
+def initial_config(tokenizer, model_name_or_path):
+    generation_config = DelayGenerationConfig.from_pretrained(model_name_or_path)
+    generation_config.pad_token_id = tokenizer.pad_token_id
+    generation_config.eos_token_id = 151653
+    generation_config.max_new_tokens = 1000000
+    generation_config.temperature = 1.0
+    generation_config.top_p = 0.95
+    generation_config.top_k = 100
+    generation_config.repetition_penalty = 1.1
+    generation_config.use_cache = True
+    generation_config.do_sample = False
+    return generation_config
+
+
 pretrained_model_name_or_path = "OpenMOSS-Team/MOSS-TTS-Local-Transformer"
 device = "cuda" if torch.cuda.is_available() else "cpu"
 dtype = torch.bfloat16 if device == "cuda" else torch.float32
@@ -339,6 +346,25 @@ model = AutoModel.from_pretrained(
 ).to(device)
 model.eval()
 
+generation_config = initial_config(processor.tokenizer, pretrained_model_name_or_path)
+generation_config.n_vq_for_inference = model.channels - 1
+generation_config.do_samples = [True] * model.channels
+generation_config.layers = [
+    {
+        "repetition_penalty": 1.0,
+        "temperature": 1.5,
+        "top_p": 1.0,
+        "top_k": 50
+    }
+] + [
+    {
+        "repetition_penalty": 1.1,
+        "temperature": 1.0,
+        "top_p": 0.95,
+        "top_k": 50
+    }
+] * (model.channels - 1)
+
 batch_size = 1
 
 save_dir = Path(f"inference_root_moss_tts_local_transformer_generation")
@@ -354,7 +380,7 @@ with torch.no_grad():
     outputs = model.generate(
         input_ids=input_ids,
         attention_mask=attention_mask,
-        max_new_tokens=4096,
+        generation_config=generation_config
     )
 
     for message in processor.decode(outputs):
@@ -362,6 +388,7 @@ with torch.no_grad():
         out_path = save_dir / f"sample{sample_idx}.wav"
         sample_idx += 1
         torchaudio.save(out_path, audio.unsqueeze(0), processor.model_config.sampling_rate)
+
 ```
 
 ### Continuation + Voice Cloning (Prefix Audio + Text)
@@ -381,6 +408,27 @@ torch.backends.cuda.enable_flash_sdp(True)
 torch.backends.cuda.enable_mem_efficient_sdp(True)
 torch.backends.cuda.enable_math_sdp(True)
 
+class DelayGenerationConfig(GenerationConfig):
+    def __init__(self, **kwargs):
+        super().__init__(**kwargs)
+        self.layers = kwargs.get("layers", [{} for _ in range(32)])
+        self.do_samples = kwargs.get("do_samples", None)
+        self.n_vq_for_inference = 32
+
+def initial_config(tokenizer, model_name_or_path):
+    generation_config = DelayGenerationConfig.from_pretrained(model_name_or_path)
+    generation_config.pad_token_id = tokenizer.pad_token_id
+    generation_config.eos_token_id = 151653
+    generation_config.max_new_tokens = 1000000
+    generation_config.temperature = 1.0
+    generation_config.top_p = 0.95
+    generation_config.top_k = 100
+    generation_config.repetition_penalty = 1.1
+    generation_config.use_cache = True
+    generation_config.do_sample = False
+    return generation_config
+
+
 pretrained_model_name_or_path = "OpenMOSS-Team/MOSS-TTS-Local-Transformer"
 device = "cuda" if torch.cuda.is_available() else "cpu"
 dtype = torch.bfloat16 if device == "cuda" else torch.float32
@@ -447,6 +495,25 @@ model = AutoModel.from_pretrained(
 ).to(device)
 model.eval()
 
+generation_config = initial_config(processor.tokenizer, pretrained_model_name_or_path)
+generation_config.n_vq_for_inference = model.channels - 1
+generation_config.do_samples = [True] * model.channels
+generation_config.layers = [
+    {
+        "repetition_penalty": 1.0,
+        "temperature": 1.5,
+        "top_p": 1.0,
+        "top_k": 50
+    }
+] + [
+    {
+        "repetition_penalty": 1.1,
+        "temperature": 1.0,
+        "top_p": 0.95,
+        "top_k": 50
+    }
+] * (model.channels - 1)
+
 batch_size = 1
 
 save_dir = Path("inference_root_moss_tts_local_transformer_continuation")
@@ -462,7 +529,7 @@ with torch.no_grad():
     outputs = model.generate(
         input_ids=input_ids,
         attention_mask=attention_mask,
-        max_new_tokens=4096,
+        generation_config=generation_config
    )
 
     for message in processor.decode(outputs):
@@ -470,6 +537,7 @@ with torch.no_grad():
         out_path = save_dir / f"sample{sample_idx}.wav"
         sample_idx += 1
         torchaudio.save(out_path, audio.unsqueeze(0), processor.model_config.sampling_rate)
+
 ```
 
 
@@ -572,31 +640,30 @@ print(model_input_text)
 
 ## 3. Evaluation
 MOSS-TTS achieved state-of-the-art results on the open-source zero-shot TTS benchmark Seed-TTS-eval, not only surpassing all open-source models but also rivaling the most powerful closed-source models.
-| Model | Params | Open‑source | EN WER (%) ↓ | EN SIM (%) ↑ | ZH CER (%) ↓ | ZH SIM (%) ↑ |
+
+| Model | Params | Open-source | EN WER (%) ↓ | EN SIM (%) ↑ | ZH CER (%) ↓ | ZH SIM (%) ↑ |
 |---|---:|:---:|---:|---:|---:|---:|
 | DiTAR | 0.6B | ❌ | 1.69 | 73.5 | 1.02 | 75.3 |
-| FishAudioS1 | 4B | ❌ | 1.72 | 62.57 | 1.22 | 72.1 |
-| CosyVoice3 | 1.5B | ❌ | 2.22 | 72 | 1.12 | 78.1 |
-| Seed‑TTS | | ❌ | 2.25 | 76.2 | 1.12 | 79.6 |
-| MiniMax‑Speech | | ❌ | 1.65 | 69.2 | 0.83 | 78.3 |
+| FishAudio-S1 | 4B | ❌ | 1.72 | 62.57 | 1.22 | 72.1 |
+| Seed-TTS | | ❌ | 2.25 | 76.2 | 1.12 | 79.6 |
+| MiniMax-Speech | | ❌ | 1.65 | 69.2 | 0.83 | 78.3 |
 | | | | | | | |
 | CosyVoice | 0.3B | ✅ | 4.29 | 60.9 | 3.63 | 72.3 |
 | CosyVoice2 | 0.5B | ✅ | 3.09 | 65.9 | 1.38 | 75.7 |
 | CosyVoice3 | 0.5B | ✅ | 2.02 | 71.8 | 1.16 | 78 |
-| F5‑TTS | 0.3B | ✅ | 2 | 67 | 1.53 | 76 |
+| CosyVoice3 | 1.5B | ✅ | 2.22 | 72 | 1.12 | 78.1 |
+| F5-TTS | 0.3B | ✅ | 2 | 67 | 1.53 | 76 |
 | SparkTTS | 0.5B | ✅ | 3.14 | 57.3 | 1.54 | 66 |
 | FireRedTTS | 0.5B | ✅ | 3.82 | 46 | 1.51 | 63.5 |
-| FireRedTTS2 | 1.5B | ✅ | 1.95 | 66.5 | 1.14 | 73.6 |
-| Qwen2.5Omni | 7B | ✅ | 2.72 | 63.2 | 1.7 | 75.2 |
-| FishAudioS1mini | 0.5B | ✅ | 1.94 | 55 | 1.18 | 68.5 |
+| FireRedTTS-2 | 1.5B | ✅ | 1.95 | 66.5 | 1.14 | 73.6 |
+| Qwen2.5-Omni | 7B | ✅ | 2.72 | 63.2 | 1.7 | 75.2 |
+| FishAudio-S1-mini | 0.5B | ✅ | 1.94 | 55 | 1.18 | 68.5 |
 | IndexTTS2 | 1.5B | ✅ | 2.23 | 70.6 | 1.03 | 76.5 |
 | VibeVoice | 1.5B | ✅ | 3.04 | 68.9 | 1.16 | 74.4 |
-| HiggsAudiov2 | 3B | ✅ | 2.44 | 67.7 | 1.5 | 74 |
-| GLM-TTS | 1.5B | ✅ | 2.23 | 67.2 | 1.03 | 76.1 |
-| GLM-TTS-RL | 1.5B | ✅ | 1.91 | 68.1 | **0.89** | 76.4 |
-| VoxCPM | 0.5B | ✅ | 1.85 | 72.9 | 0.93 | 77.2 |
-| Qwen3‑TTS | 0.6B | ✅ | 1.68 | 70.39 | 1.23 | 76.4 |
-| Qwen3‑TTS | 1.7B | ✅ | **1.5** | 71.45 | 1.33 | 76.72 |
+| HiggsAudio-v2 | 3B | ✅ | 2.44 | 67.7 | 1.5 | 74 |
+| VoxCPM | 0.5B | ✅ | 1.85 | 72.9 | **0.93** | 77.2 |
+| Qwen3-TTS | 0.6B | ✅ | 1.68 | 70.39 | 1.23 | 76.4 |
+| Qwen3-TTS | 1.7B | ✅ | **1.5** | 71.45 | 1.33 | 76.72 |
 | | | | | | | |
-| **MossTTSDelay** | **8B** | ✅ | 1.84 | 70.86 | 1.37 | 76.98 |
-| **MossTTSLocal** | **1.7B** | ✅ | 1.93 | **73.28** | 1.44 | **79.62** |
+| MossTTSDelay | 8B | ✅ | 1.79 | 71.46 | 1.32 | 77.05 |
+| MossTTSLocal | 1.7B | ✅ | 1.85 | **73.42** | 1.2 | **78.82** |
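The generation scripts in this diff replace the bare `max_new_tokens=4096` call with a generation config that carries per-channel sampling settings: channel 0 decodes text tokens and the remaining channels decode RVQ audio codebooks, each with its own knobs. A minimal standalone sketch of how that per-layer list is shaped (plain dicts only; `channels` is a stand-in for `model.channels`):

```python
# Per-channel sampling settings, mirroring the values used in the diff above.
# `channels` is a stand-in for `model.channels` in the real scripts.
channels = 9  # hypothetical: 1 text stream + 8 RVQ audio codebooks

text_layer = {"repetition_penalty": 1.0, "temperature": 1.5, "top_p": 1.0, "top_k": 50}
audio_layer = {"repetition_penalty": 1.1, "temperature": 1.0, "top_p": 0.95, "top_k": 50}

# Copy the audio dict per channel so later per-channel tweaks don't alias
# one shared dict (the `] * (channels - 1)` form in the diff shares one).
layers = [dict(text_layer)] + [dict(audio_layer) for _ in range(channels - 1)]
do_samples = [True] * channels
n_vq_for_inference = channels - 1  # number of audio codebooks actually decoded
```

Sampling the text stream hotter (temperature 1.5, no nucleus cut) while keeping the audio codebooks conservative (top-p 0.95, repetition penalty 1.1) matches the values the diff assigns via `generation_config.layers`.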
modeling_moss_tts.py CHANGED
@@ -616,109 +616,6 @@ class MossTTSDelayModel(MosiTTSPretrainedModel, CustomMixin):
     def can_generate(self):
         return True
 
-    def _build_generation_config(
-        self,
-        generation_config: Optional[GenerationConfig] = None,
-        max_new_tokens: Optional[int] = None,
-        text_temperature: Optional[float] = None,
-        text_top_p: Optional[float] = None,
-        text_top_k: Optional[int] = None,
-        text_repetition_penalty: Optional[float] = None,
-        audio_temperature: Optional[float] = None,
-        audio_top_p: Optional[float] = None,
-        audio_top_k: Optional[int] = None,
-        audio_repetition_penalty: Optional[float] = None,
-        n_vq_for_inference: Optional[int] = None,
-    ) -> GenerationConfig:
-        config = copy.deepcopy(generation_config or self.generation_config)
-
-        text_temperature = 1.5 if text_temperature is None else float(text_temperature)
-        text_top_p = 1.0 if text_top_p is None else float(text_top_p)
-        text_top_k = 50 if text_top_k is None else int(text_top_k)
-        text_repetition_penalty = 1.0 if text_repetition_penalty is None else float(text_repetition_penalty)
-        audio_temperature = 1.0 if audio_temperature is None else float(audio_temperature)
-        audio_top_p = 0.95 if audio_top_p is None else float(audio_top_p)
-        audio_top_k = 50 if audio_top_k is None else int(audio_top_k)
-        audio_repetition_penalty = 1.1 if audio_repetition_penalty is None else float(audio_repetition_penalty)
-
-        text_do_sample = text_temperature > 0
-        if not text_do_sample:
-            text_temperature = 1.0
-        audio_do_sample = audio_temperature > 0
-        if not audio_do_sample:
-            audio_temperature = 1.0
-
-        if max_new_tokens is not None:
-            config.max_new_tokens = int(max_new_tokens)
-        elif getattr(config, "max_new_tokens", None) is None:
-            config.max_new_tokens = 100000  # about 2.2 hours, can be overridden by user input; set a smaller value for faster generation during debugging
-
-        if getattr(config, "pad_token_id", None) is None:
-            config.pad_token_id = self.config.pad_token_id
-        config.eos_token_id = self.config.audio_end_token_id
-        config.use_cache = True
-        config.do_sample = text_do_sample or audio_do_sample
-
-        resolved_n_vq = self.channels - 1 if n_vq_for_inference is None else int(n_vq_for_inference)
-        resolved_n_vq = max(1, min(self.channels - 1, resolved_n_vq))
-        config.n_vq_for_inference = resolved_n_vq
-        config.do_samples = [text_do_sample] + [audio_do_sample] * (self.channels - 1)
-        config.layers = [
-            {
-                "repetition_penalty": text_repetition_penalty,
-                "temperature": text_temperature,
-                "top_p": text_top_p,
-                "top_k": text_top_k,
-            }
-        ] + [
-            {
-                "repetition_penalty": audio_repetition_penalty,
-                "temperature": audio_temperature,
-                "top_p": audio_top_p,
-                "top_k": audio_top_k,
-            }
-            for _ in range(self.channels - 1)
-        ]
-        return config
-
-    @torch.inference_mode()
-    def generate(
-        self,
-        input_ids: torch.LongTensor,
-        attention_mask: Optional[torch.Tensor] = None,
-        generation_config: Optional[GenerationConfig] = None,
-        max_new_tokens: Optional[int] = None,
-        text_temperature: Optional[float] = None,
-        text_top_p: Optional[float] = None,
-        text_top_k: Optional[int] = None,
-        text_repetition_penalty: Optional[int] = None,
-        audio_temperature: Optional[float] = None,
-        audio_top_p: Optional[float] = None,
-        audio_top_k: Optional[int] = None,
-        audio_repetition_penalty: Optional[float] = None,
-        n_vq_for_inference: Optional[int] = None,
-        **kwargs,
-    ):
-        resolved_generation_config = self._build_generation_config(
-            generation_config=generation_config,
-            max_new_tokens=max_new_tokens,
-            text_temperature=text_temperature,
-            text_top_p=text_top_p,
-            text_top_k=text_top_k,
-            text_repetition_penalty=text_repetition_penalty,
-            audio_temperature=audio_temperature,
-            audio_top_p=audio_top_p,
-            audio_top_k=audio_top_k,
-            audio_repetition_penalty=audio_repetition_penalty,
-            n_vq_for_inference=n_vq_for_inference,
-        )
-        return super().generate(
-            input_ids=input_ids,
-            attention_mask=attention_mask,
-            generation_config=resolved_generation_config,
-            **kwargs,
-        )
-
     # def tie_weights(self):
     #     ...
     #     for i in range(self.config.channels):
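With the `generate` override deleted, callers that relied on its keyword defaults now build the generation config themselves. The defaulting and clamping the removed `_build_generation_config` performed can be reproduced outside the model; a minimal standalone sketch (hypothetical helper names, `channels` standing in for `self.channels`):

```python
def resolve_n_vq(channels, n_vq_for_inference=None):
    # Default to all audio codebooks (channels - 1), then clamp into
    # [1, channels - 1], as the deleted _build_generation_config did.
    resolved = channels - 1 if n_vq_for_inference is None else int(n_vq_for_inference)
    return max(1, min(channels - 1, resolved))

def resolve_temperature(temperature, default):
    # A non-positive temperature means greedy decoding; the temperature is
    # then reset to a neutral 1.0 so logits processing stays well-defined.
    t = default if temperature is None else float(temperature)
    do_sample = t > 0
    return (t if do_sample else 1.0), do_sample
```

The clamp guarantees at least one audio codebook is decoded even if a caller passes 0 or a negative value, and never more codebooks than the model actually has.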
processing_moss_tts.py CHANGED
@@ -621,7 +621,7 @@ class MossTTSDelayProcessor(ProcessorMixin):
             prefix_idx = audio_end_idx
 
             if truncation:
-                ...
+                raise RuntimeError("Truncation generation is not supported at present")
             else:
                 last_audio_end_idx = int(audio_end_indices[-1].item())
                 pad_codes = torch.full(
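The processor change above turns a silent `...` placeholder into an explicit failure, so callers who pass `truncation=True` learn immediately that the branch is unimplemented rather than getting unpadded output. A tiny standalone illustration of the same guard pattern (hypothetical helper, plain lists instead of tensors):

```python
def pad_codes(codes, target_len, pad_id=0, truncation=False):
    # Fail loudly on the unsupported branch instead of silently doing nothing,
    # matching the RuntimeError added in MossTTSDelayProcessor.
    if truncation:
        raise RuntimeError("Truncation generation is not supported at present")
    return list(codes) + [pad_id] * (target_len - len(codes))
```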