Open-source Lorsas and Transcoders
AI & ML interests
LLM
Papers
BandPO: Bridging Trust Regions and Ratio Clipping via Probability-Aware Bounds for LLM Reinforcement Learning
MOSS-Audio-Tokenizer: Scaling Audio Tokenizers for Future Audio Foundation Models
A unified multimodal large language model for end-to-end speaker-attributed, time-stamped transcription.
Evaluating Agentic Backend Coding Capabilities in Real-World Development Scenarios
- ABC-Bench: Benchmarking Agentic Backend Coding in Real-World Development
  Paper • 2601.11077 • Published • 65
- OpenMOSS-Team/ABC-Bench
  Viewer • Updated • 224 • 172 • 3
- OpenMOSS-Team/Qwen3-32B-ABC
  Text Generation • 33B • Updated • 5 • 1
- OpenMOSS-Team/Qwen3-8B-ABC
  Text Generation • 8B • Updated • 5 • 2
[ICLR 2026] Game-RL: Synthesizing Multimodal Verifiable Game Data to Boost VLMs' General Reasoning
An Efficient Training Framework for Diffusion Language Models
- World Modeling Makes a Better Planner: Dual Preference Optimization for Embodied Task Planning
  Paper • 2503.10480 • Published • 56
- Unleashing Embodied Task Planning Ability in LLMs via Reinforcement Learning
  Paper • 2506.23127 • Published • 1
- World-aware Planning Narratives Enhance Large Vision-Language Model Planner
  Paper • 2506.21230 • Published
- OpenMOSS-Team/Embodied_R1-ScienceWorld
  8B • Updated • 3
The MHA2MLA models published in the paper "Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs"
- OpenMOSS-Team/SmolLM-135M-MLA-d_kv_8-refactor
  Text Generation • 0.1B • Updated • 4
- OpenMOSS-Team/SmolLM-135M-MLA-d_kv_32-refactor
  Text Generation • 0.1B • Updated • 1
- OpenMOSS-Team/SmolLM-135M-MLA-d_kv_16-refactor
  Text Generation • 0.1B • Updated • 4
- OpenMOSS-Team/SmolLM-360M-MLA-d_kv_8-refactor
  Text Generation • 0.3B • Updated
- OpenMOSS-Team/moss-moon-003-sft-plugin
  Text Generation • Updated • 25 • 69
- OpenMOSS-Team/moss-moon-003-sft
  Text Generation • Updated • 63 • 127
- OpenMOSS-Team/moss-moon-003-base
  Text Generation • Updated • 136 • 131
- OpenMOSS-Team/moss-moon-003-sft-int4
  Text Generation • Updated • 34 • 40
- OpenMOSS-Team/MOSS-TTS
  Text-to-Speech • 8B • Updated • 106k • 340
- OpenMOSS-Team/MOSS-TTS-Local-Transformer
  Text-to-Speech • 3B • Updated • 58.7k • 20
- OpenMOSS-Team/MOSS-TTS-Realtime
  Text-to-Speech • 2B • Updated • 90.9k • 65
- OpenMOSS-Team/MOSS-Audio-Tokenizer
  Feature Extraction • 2B • Updated • 83.4k • 37
- OpenMOSS-Team/MOSS-TTSD-v1.0
  Text-to-Speech • 8B • Updated • 23.1k • 49
- OpenMOSS-Team/MOSS-TTSD-v0.7
  Text-to-Speech • 2B • Updated • 296 • 17
- OpenMOSS-Team/MOSS-TTSD-v0.5
  Text-to-Speech • 2B • Updated • 1.39k • 53
- OpenMOSS-Team/MOSS-TTSD-v0
  Text-to-Speech • 2B • Updated • 4 • 27
True Speech-to-Speech Language Model
First Omni-modal Future Forecasting Benchmark
https://github.com/OpenMOSS/FRoM-W1
Proactive Robot Manipulation in Omni-modal Context
Open-source weights of the Lorsa modules introduced in "Towards Understanding the Nature of Attention with Low-Rank Sparse Decomposition".
The MHA2MLA models published in the paper "Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs"
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs
  Paper • 2502.14837 • Published • 3
- OpenMOSS-Team/Llama-2-7B-MLA-d_kv_16
  Text Generation • 6B • Updated • 1
- OpenMOSS-Team/Llama-2-7B-MLA-d_kv_32
  Text Generation • 6B • Updated • 1
- OpenMOSS-Team/Llama-2-7B-MLA-d_kv_64
  Text Generation • 7B • Updated • 4